why do computer scientists say 1KB=1024 bytes?!!

...

That's with KDE running. I've stopped it, and started Fluxbox, and now I get:

$ free
             total       used       free     shared    buffers     cached
Mem:        312860     151652     161208          0      30844      66484
-/+ buffers/cache:      54324     258536
Swap:      1493960          0    1493960
richgrise@thunderbird:~

That was before I started "Pan", and this is

$ free
             total       used       free     shared    buffers     cached
Mem:        312860     306412       6448          0      10284     123536
-/+ buffers/cache:     172592     140268
Swap:      1493960         12    1493948
richgrise@thunderbird:~ $

now.

Not that it makes any difference to anybody.

Just for the sake of insufferable pedantry:

$ ps ax
  PID TTY      STAT   TIME COMMAND
    1 ?        S      0:03 init [3]
    2 ?        S      0:00 [keventd]
    3 ?        SN     0:00 [ksoftirqd_CPU0]
    4 ?        S      0:00 [kswapd]
    5 ?        S      0:00 [bdflush]
    6 ?        S      0:00 [kupdated]
   10 ?        S<     0:00 [mdrecoveryd]
   11 ?        S      0:00 [kreiserfsd]
   41 ?        S

Reply to
Rich Grise

Yes, it bothers me exactly 20.6/300.

Or so.

;-P

Thanks! Rich

Reply to
Rich Grise, but drunk

Yeah, I didn't even know VAXes f***ed!

;-P Rich

Reply to
Rich Grise, but drunk

Thank the Universe for them!!!!

If women were in charge, wars would happen only once a month, and then only last for a week.

Maybe we should elect somebody like Sally Jessy Raphael for president. ;-P

Cheeers! Rich

Reply to
Rich Grise, but drunk

I got a 60 mA iron teletype once that some guy would otherwise have tossed. I actually built a 60 mA loop, and ran it off my 8008. (It was kind of a PITA, because I hadn't written the monitor where I could use the keyboard, so I had to enter _everything_ by way of the three pushbuttons (not debounced) and eight toggle switches on the front panel.)

But I learned, in spades: Do NOT use WD-40 on a teletype! It turns to goo when the solvent evaporates. WD-40 is for door hinges and squeaky ball joints and tie rod ends. I spent the next two weeks with a typewriter brush and can of MEK or xylene or trichlorethylene or something cleaning the goo off the precision parts.

And then, it was just fascinating to watch it mechanically deserialize the 300 Baud (or whatever it was - maybe 110) data. :-)

I also made that 8008 play the melody of "Daisy, Daisy." ;-P

Thanks! Rich

Reply to
Rich Grise, but drunk

What? No Friden Flexowriter? ;-D ;-D ;-D

Cheers! Rich

Reply to
Rich Grise, but drunk

"Exeutrix?" BAH, are you a girl? (I was an executor a couple of times, and I'm a boy, or at least I have all boy parts.)

But, back to the point, I agree with Mr. Shakespeare: "First, we kill all of the lawyers"...

Cheers! Rich

Reply to
Rich Grise, but drunk

You can have my banana when you pry it from my cold, dead fingers.

;-P Cheers! Rich

Reply to
Rich Grise, but drunk

And let us not forget the ever-popular "half-carry" flag, and the "adjust decimal" (or whatever the fxx it was) instruction. ;-)
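
For anyone who never met it: a rough C sketch of what that decimal-adjust instruction did with the half-carry flag when adding packed-BCD digits. This is my illustration, loosely modeled on 8080/Z80 DAA behaviour, not a cycle-accurate reconstruction:

    /* Loose sketch of DAA after adding two packed-BCD bytes: if the low
     * nibble overflowed 9 (that's what the half-carry flag records), add
     * 6 to fix the low digit, then fix the high digit the same way. */
    #include <stdio.h>

    unsigned bcd_add(unsigned a, unsigned b, unsigned *carry_out)
    {
        unsigned sum        = a + b;                    /* plain binary add */
        unsigned half_carry = ((a & 0x0F) + (b & 0x0F)) > 0x0F;

        if (half_carry || (sum & 0x0F) > 9)
            sum += 0x06;                                /* adjust low digit  */
        if (sum > 0x99)
            sum += 0x60;                                /* adjust high digit */

        *carry_out = sum > 0xFF;
        return sum & 0xFF;
    }

    int main(void)
    {
        unsigned carry;
        unsigned result = bcd_add(0x38, 0x45, &carry);  /* 38 + 45 in BCD */
        printf("result = %02X, carry = %u\n", result, carry);  /* 83, 0 */
        return 0;
    }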

Cheers! Rich

Reply to
Rich Grise, but drunk

Hi Kleuskes_Moos, You asked me:

How many programs do you know that require you to use binary, octal or hexadecimal numbers? None?

Well, have you ever used Win_XP's: Right_Click --> View --> Details ? All the sizes are in KB and MB. Right_Click --> Properties shows things like: 32.0 KB ( 32,768 bytes )
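
A minimal C sketch (mine, not Jeff's) of what Explorer is doing there: dividing by 1024 rather than 1000, which is exactly the 1KB = 1024 bytes convention the thread is arguing about:

    /* 32,768 bytes shown as "32.0 KB": Explorer divides by 1024. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long bytes = 32768UL;

        printf("%lu bytes = %.1f KB (binary, /1024)\n", bytes, bytes / 1024.0);
        printf("%lu bytes = %.3f kB (SI, /1000)\n",     bytes, bytes / 1000.0);
        return 0;
    }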

Reply to
Jeff_Relf

Hi Happy_Madman and Rex_Ballard, Happy_Madman asked Ballard:

I thought a 'byte' is an 'unsigned char'. Is this definition flawed?

I use Microsoft's VS_2005/VC_8, and a byte is just 8 bits, signed or not; it's the smallest addressable integer. But if you own an AMD64, you can't fetch anything smaller than a 64-bit integer.

A character could be Unicode, 16 bits, like my code that uses SimSun or MS_Mincho, Mainland_Chinese, or it could be 7_bit ANSI, like Outlook, a Usenet client.

In VC_8, _char_ is a keyword that defines an 8-bit integer; it's signed by default, but a compiler switch will make it unsigned. VC_8's _int_ and _long_ are always 32 bits, even on an AMD64.
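
If you want to check those claims on your own compiler, here is a small C sketch (my example, not Jeff's code) that reports the sizes and the default signedness of char:

    /* Report char/int/long sizes and whether plain char is signed;
     * the answers depend on the compiler and its switches (e.g. /J in VC). */
    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        printf("CHAR_BIT     = %d\n", CHAR_BIT);
        printf("sizeof(char) = %u\n", (unsigned)sizeof(char));
        printf("sizeof(int)  = %u\n", (unsigned)sizeof(int));
        printf("sizeof(long) = %u\n", (unsigned)sizeof(long));
        printf("plain char is %s by default\n",
               (CHAR_MIN < 0) ? "signed" : "unsigned");
        return 0;
    }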

Reply to
Jeff_Relf

The Prophet , known to the wise as snipped-for-privacy@yahoo.com, opened the Book of Words, and read unto the people:

Yes. A byte is a sequence of 8 bits. It's a low-level representation of memory units. It's not "signed" or "unsigned" because it's merely a representation of 8 consecutive binary states. It doesn't even need to be interpreted numerically, although most modern architectures do so somewhere in processing it.

An 'unsigned char' (in the C-centric world of datatypes) is a high-level representation of a data-type which is a "small integer" -- traditionally, with enough possible values to represent every character in the system's standard character set. From an implementation standpoint, this means that an unsigned char is _usually_ an 8-bit unsigned numeric value, but depending on the architecture, it may be more (I'm not sure it's ever been less, but it certainly shouldn't be on a modern system).

So in practice, a char is frequently a byte in size, but there's a valid distinction to be made between an implementation-level unit of data and an interface-level abstraction of a data unit.
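
A short C sketch of the distinction Jake is drawing (my illustration, not his): unsigned char is C's window onto raw storage, and the "bytes" of any object can be inspected through it without committing to a numeric interpretation:

    /* Inspect the raw bytes of an int through an unsigned char pointer.
     * Each "byte" is printed as a small integer only because printf is
     * told to interpret it that way; in memory it's just CHAR_BIT bit
     * states with no inherent numeric meaning. */
    #include <stdio.h>

    int main(void)
    {
        unsigned int value = 0x12345678u;
        const unsigned char *raw = (const unsigned char *)&value;
        unsigned i;

        for (i = 0; i < sizeof value; i++)
            printf("byte %u: 0x%02X\n", i, raw[i]);
        return 0;
    }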

--
     D. Jacob (Jake) Wildstrom, Math monkey and freelance thinker

"A mathematician is a device for turning coffee into theorems."
  -Alfred Renyi

The opinions expressed herein are not necessarily endorsed by the
University of California or math department thereof.
Reply to
Jake Wildstrom

{...}

Have you worked out your technique? I just tried raising and lowering fingers, and the carpal tunnel is still complaining.

--
Tom Hardy    rhardy@visi.com    http://www.visi.com/~rhardy
  Just don't create a file called -rf. --Larry Wall
Reply to
Tom Hardy

Don't confuse what a byte is with the syntax or semantics of a programming language.

Yes. A byte is a consecutive string of bits. It could be either narrower than or wider than a character.

--
Shmuel (Seymour J.) Metz, SysProg and JOAT  

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org
Reply to
Shmuel (Seymour J.) Metz

It's also a sequence of 2 bits, 4 bits, 7 bits or whatever size you would like it to be. For an example of correct usage see the instruction set of the DEC PDP-6 or the IBM 7030.

--
Shmuel (Seymour J.) Metz, SysProg and JOAT  

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org
Reply to
Shmuel (Seymour J.) Metz

My wife is an executrix.

John

Reply to
John Larkin

Correction: He earned all JMF's money.

My wife was very fortunate with her mother's estate. She didn't even have to go to court. IIRC she hired a lawyer to (officially) tell her she didn't need one. ;-)

--
  Keith
Reply to
Keith Williams

Nope. I'd still be trying to close probate if I hadn't had the lawyer. He earned his money.

/BAH

Reply to
jmfbahciv

NO! This is impossible since not even bit gods can predict the past or the future. One of the reasons there are network protocols is so this information doesn't have to be hard-coded. It's a law of OS computing that, as soon as you assume a field size, it will change.

Now, granted, there is^Wwas the problem of very little memory and storage being available, so some tradeoffs had to be made. One of the bad things that TOPS-10 had was field definitions that could not be expanded. When you're dealing with unknown nodes out in the networks, you cannot assume byte sizes or much of anything else. You have to build into the code extensibility to cope with things done by code you don't control, don't know exists, and may never be able to test.

Exactly. That's why you cannot make the constraint that both architectures have to be known.

Sigh! I'm trying to prevent a mess. You have detected that these kids think there are 8, and only 8, bits in a byte? What are they going to do when it goes to 24? I'll tell you: they'll barf their bits all over the disk.
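
A minimal C sketch of the kind of defensive coding /BAH is describing -- take the field's size from the data instead of hard-coding it. The 2-octet big-endian length prefix here is purely an assumption for illustration, not any particular protocol's wire format:

    /* Read a length-prefixed field so the code never hard-codes a field size. */
    #include <stdlib.h>
    #include <string.h>

    /* Returns a malloc'd copy of the field, or NULL on short input.
     * *consumed reports how many octets were eaten, so a caller can
     * skip over fields (or extensions) it doesn't understand. */
    unsigned char *read_field(const unsigned char *buf, size_t buflen,
                              size_t *consumed)
    {
        size_t len;
        unsigned char *field;

        if (buflen < 2)
            return NULL;
        len = ((size_t)buf[0] << 8) | buf[1];    /* size comes from the data */
        if (buflen - 2 < len)
            return NULL;

        field = malloc(len ? len : 1);
        if (field == NULL)
            return NULL;
        memcpy(field, buf + 2, len);
        *consumed = 2 + len;
        return field;
    }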

Please note that the OP assumes that, if a Greek character (or, rather, its associated noun) is being used, it must be SI!!!!

/BAH


Reply to
jmfbahciv

Peter wrote:

And COBOL, primarily. PL/1 and COBOL are two of the very few languages that directly support decimal data types (a.k.a. BCD, packed decimal, and zoned decimal).

Which of course explains why IBM used it to begin with: it does not suffer from the rounding errors that arise when converting decimal values to and from binary. Actually, the origin of BCD and its cousin EBCDIC can be traced to Hollerith punched cards, which were decimal-based.
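
A tiny C illustration (mine, not David's) of the rounding problem decimal types avoid: 0.10 has no exact binary representation, so repeated addition in a double drifts, while counting whole cents does not:

    /* Summing 0.10 a thousand times in binary floating point drifts;
     * summing 10 cents in an integer (decimal fixed point) does not. */
    #include <stdio.h>

    int main(void)
    {
        double dollars = 0.0;
        long   cents   = 0;
        int    i;

        for (i = 0; i < 1000; i++) {
            dollars += 0.10;    /* binary floating point        */
            cents   += 10;      /* scaled integer: whole cents  */
        }

        printf("binary double : %.10f\n", dollars);   /* not exactly 100 */
        printf("decimal cents : %ld.%02ld\n", cents / 100, cents % 100);
        return 0;
    }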

Reply to
David R Tribble
