Getting started with AVR and C

I thought they had guaranteed they would never go above U+10FFFF, which would break UTF-16.

AIUI, there are some DSPs with CHAR_BIT==24 (or was that 12?).

S
--
Stephen Sprunk         "God does not play dice."  --Albert Einstein 
CCIE #3723         "God is an inveterate gambler, and He throws the 
K5SSS        dice at every possible opportunity." --Stephen Hawking
Reply to
Stephen Sprunk

Sorry I didn't interpret things well.

I've needed writable code space. Thunking is one such example.

While I agree with the "generally" I don't agree that this translates into 100%.

Indeed. Completely agreed.

Jon

Reply to
Jon Kirwan

I recall reading some posts in this newsgroup a long time ago which claimed that, under certain circumstances, it was possible in C99 for unsigned int to promote to type signed int.

But that was never the case.

In C99, 6.3.1.1 paragraph 2 read "less than" instead of "less than or equal" as you have above, and the unsigned int type was covered by "All other types" in the last sentence.
--
pete
Reply to
pete

[...]

That may well be true; I never used a Cray-1. (And there was more emphasis on Fortran, or should I say FORTRAN, than on C.)

By the time I started using Crays, they were running Unicos, Cray's version of Unix, so they pretty much had to have CHAR_BIT==8.

--
Keith Thompson (The_Other_Keith) kst-u@mib.org   
    Will write code for food. 
"We must do something.  This is something.  Therefore, we must do this." 
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
Reply to
Keith Thompson

You're right. It says:

Both Unicode and ISO 10646 have policies in place that formally limit future code assignment to the integer range that can be expressed with current UTF-16 (0 to 1,114,111).
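To see why the ceiling sits exactly at U+10FFFF: a UTF-16 surrogate pair carries 20 bits of payload on top of 0x10000, so the largest encodable code point is 0x10000 + 0xFFFFF = 0x10FFFF. A minimal encoder sketch in C (my own illustration, ignoring the D800-DFFF gap for brevity):

    #include <stdint.h>

    /* Encode one code point as UTF-16; returns the number of
       16-bit units written, or 0 if cp is out of range. */
    static int utf16_encode(uint32_t cp, uint16_t out[2])
    {
        if (cp < 0x10000) {                       /* BMP: one unit */
            out[0] = (uint16_t)cp;
            return 1;
        }
        if (cp > 0x10FFFF)                        /* beyond surrogate reach */
            return 0;
        cp -= 0x10000;                            /* 20 bits left */
        out[0] = 0xD800 | (uint16_t)(cp >> 10);   /* high surrogate */
        out[1] = 0xDC00 | (uint16_t)(cp & 0x3FF); /* low surrogate */
        return 2;
    }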

--
Keith Thompson (The_Other_Keith) kst-u@mib.org   
    Will write code for food. 
"We must do something.  This is something.  Therefore, we must do this." 
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
Reply to
Keith Thompson

An unsigned type whose entire range can be represented by an int will promote to signed int, as can easily be confirmed by checking the above text, and that point has been raised in this group - there were several threads that touched on that subject in just this past summer. However, anyone who claimed that it could happen to "unsigned int" was mistaken. That clause explicitly applies only to types "other than int or unsigned int".
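A concrete illustration of that promotion (my own example, assuming a typical platform where int is wider than short):

    #include <stdio.h>

    int main(void)
    {
        unsigned short us = 0;
        /* Where int can represent every unsigned short value, the
           integer promotions convert us to (signed) int, so us - 1
           is -1 rather than a huge unsigned value. */
        printf("%d\n", us - 1 < 0);   /* prints 1 on such platforms */
        return 0;
    }

On a platform where short and int have the same width (some DSPs, say), unsigned short would promote to unsigned int instead, and the program would print 0.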

n1256.pdf (which is C99 with all three TCs applied, making it MORE useful than C99 itself) and n1570.pdf (which is essentially identical to C2011) both have "less than or equal to". The line is marked as being changed from C99 in n1256.pdf, implying that one of the TCs is the reason. My copy of C99 itself is inaccessible right now, so I can't confirm the nature of the change.

--
James Kuyper
Reply to
James Kuyper

Yes, that is precisely it. The AVRs especially tended to have lots of flash but little RAM. Access to program memory is possible on the AVR, but you have to use special attribute modifiers everywhere and the resulting objects become incompatible with the standard libraries, so you have to write special versions of these...
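With avr-gcc, for instance, the usual idiom looks something like this (a sketch; uart_putc is a hypothetical output routine, not part of any standard library):

    #include <avr/pgmspace.h>

    /* Keep the string in flash instead of scarce SRAM. */
    static const char msg[] PROGMEM = "hello from flash";

    extern void uart_putc(char c);   /* hypothetical */

    /* Ordinary string functions can't read flash, so you use the
       _P variants from avr-libc or fetch the bytes yourself: */
    static void put_flash_str(const char *p)
    {
        char c;
        while ((c = pgm_read_byte(p++)) != '\0')
            uart_putc(c);
    }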

Another thing is that, the AVR being an 8-bit machine, int and short operations are not atomic. So you have to be very careful about protecting variables shared with interrupt handlers (or other tasks in a preemptive system). Good practice anyway, of course, but a modern CPU like the Cortex-M3 is a lot more forgiving, since even 32-bit load/store operations are atomic.
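For example, reading a 16-bit counter shared with a timer ISR (a sketch using avr-libc's <util/atomic.h>):

    #include <stdint.h>
    #include <util/atomic.h>

    volatile uint16_t tick;   /* incremented in a timer ISR */

    /* A 16-bit read is two byte accesses on an 8-bit AVR, and an
       interrupt can fire between them; ATOMIC_BLOCK masks
       interrupts around the copy and restores them afterwards. */
    uint16_t read_tick(void)
    {
        uint16_t t;
        ATOMIC_BLOCK(ATOMIC_RESTORESTATE) {
            t = tick;
        }
        return t;
    }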

[...]
--

John Devereux
Reply to
John Devereux

It was TC2, and the change came from DR 230. It was to handle the case of enumerated types with the same rank as int; it didn't have anything to do with unsigned int.

--
Larry Jones 

I'm a genius. -- Calvin
Reply to
lawrence.jones

For Unix, serial I/O was as important as efficient storage of data. Most serial terminals couldn't do more than 8 bits, and usually ran 7E or 7O. So the 8-bit char became standard.

Reply to
me

Given the cost of memory back then, primary or secondary, a great many man-hours were spent on efficient storage. Serial I/O was used almost exclusively because of how modems worked then, for transmission over long distances. (Some may argue that it requires fewer wires, too, in cables. But that was less an issue then -- witness the 36-pin and 25-pin Centronics cables/connectors, which were very wire-heavy.) It turns out that terminals, like the ASR-33 and KSR-35, were often used without a computer for dial-up modem use over a phone line. So they used a serial interface, by design. Which meant that Unix needed to cope with it. But I wouldn't say "as important as." I worked on the v6 Unix kernel, so I was slightly aware of the situation.

Of course, I was just bringing up the PDP-10 because of its odd way of packing 7-bit codes into a 36-bit word.

At the time, there was no real standard at all. I saw equal numbers of machines using EBCDIC and 6-bit (5-bit Baudot was waning by this time, but I also remember old terminals that used 5-bit) and 7-bit. No machine used 8-bit for anything, then. The 8th bit was always looked at as either 'don't punch it at all, so the paper tape is more durable' or else make it even or odd parity. Some of us would write programs to punch out visible English messages on the tape, which was one of the few reasons we actually wanted control over 8 bits (for those paper punch machines that punched 8.) I honestly hoped, but didn't know, that ASCII would win out in the end. I almost had a feeling then that I'd be converting from one code to another for the rest of my life, if things continued as they were. I wanted ASCII to win, though.

Side note: there was only a gradual "coming together" on the idea that an 8-bit byte was a "good idea." I think a lot of people these days imagine that it was always as obvious and as ubiquitous as it is today. But that's not entirely true. Things went to 8-bit gradually. Partly because 8 bits is a nice 2^3 power thing, and partly because ASCII was gradually taking over as a standard and would fit into an 8-bit byte nicely. There was a confluence of forces going on, and this kind of "precipitated out" to what it is today.

Side note again: Recently, I read a "personal history" talking about the complexity of the ASR-33. The author has no idea. I also remember quite well the much more complicated KSR-35. I worked on repairing both, from time to time. By comparison, the ASR-33 was a toy, designed for a shorter lifetime and less complex as well. The earlier KSR-35 was made for men, so to speak -- an extremely well lubricated system with real man-parts and not toy pieces. The ASR-33 had a cute little cylinder with the letters on it, not that unlike the typewriter ball. The KSR-35 had a large hammer block, instead.

Jon

Reply to
Jon Kirwan

But one (perhaps unwritten) goal was uniform files and I/O. So the lowest common denominator (8 bits) was necessary.

Reply to
me

Yes. Odd. But compact; for data processing you can see their reasons. Elsewhere, like the GE Datanet-30 communication processor, ASCII in a nine-bit byte meant you could stuff a start bit, 7 data bits and parity into one byte-wide buffer register. Shifting in ones as the data bits shifted out meant that (unless somebody intervened by loading another character) you'd be left with 0777 after the character was transmitted, and thus you'd be driving an idle line as long as needed.
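A little simulation of that trick (my guess at the framing; LSB goes out first, with ones shifted in behind):

    #include <stdio.h>

    int main(void)
    {
        unsigned data   = 'A' & 0177;            /* 7 data bits */
        unsigned parity = 0;
        for (unsigned d = data; d; d >>= 1)
            parity ^= d & 1;                     /* even parity */

        /* start bit (0) in the LSB, data next, parity on top */
        unsigned reg = (parity << 8) | (data << 1);

        for (int i = 0; i < 9; i++) {
            printf("%u", reg & 1);               /* bit on the line */
            reg = (reg >> 1) | 0400;             /* shift a one in */
        }
        printf("  register is now %03o\n", reg); /* 0777: idle line */
        return 0;
    }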

Mel.

Reply to
Mel Wilson

No. It could have been, had this been an IBM machine, that the lowest common denominator would have been something else. The 8-bit choice was a confluence of many factors, not just a simple deduction that was "obvious." I know. Because I remember wondering, and having debates, about where this would wind up in the end. It wasn't "necessary." If it was, we'd all have seen it that way at the time. And we didn't.

Jon

Reply to
Jon Kirwan

Thanks for this note, Mel!

Jon

Reply to
Jon Kirwan

If you think about a mechanical typewriter or line printer, how on earth would you fit a large character set into it?

Being able to print numbers was of course the most needed feature; printing a very limited set of (upper case) letters was a bonus, and some special characters like + or - were a bonus as well.

The English language can be represented with only a few letters, but most other Latin-based languages needed some extra characters. In ISO 646 these were taken from the special-character positions; thus square brackets were replaced by national variants.

On a line printer you typically had a 64-character (upper case only) or 96-character (with lower case) drum, and you had to order a drum made for your non-Anglo-Saxon country.

Baudot 5 bit was of course popular due to the huge number of Telex terminals that existed in those days. Manchester Mark I used Baudot for this reason.

Later on, 36 bits was quite common, capable of storing six 6-bit (upper case only) characters or nine 4-bit BCD decimal digits (sufficient for storing non-inflated dollar values in a single memory word).
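A rough illustration of that packing (simulated in a 64-bit integer; the function name is mine):

    #include <stdint.h>
    #include <stdio.h>

    /* Pack nine decimal digits, 4 bits each, into the low 36 bits. */
    static uint64_t pack_bcd9(const char *digits)
    {
        uint64_t w = 0;
        for (int i = 0; i < 9; i++)
            w = (w << 4) | (uint64_t)(digits[i] - '0');
        return w;
    }

    int main(void)
    {
        /* Printed in hex, the BCD nibbles read back as the digits. */
        printf("%llx\n", (unsigned long long)pack_bcd9("123456789"));
        return 0;
    }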

If you forgot to immediately mark your paper tapes with a pen or some easily recognizable punch figures, you were in great trouble :-)

However, 5x7 upper case characters were the norm for dot matrix displays, so I think 7 controllable bits would have been enough?

Computers were originally used exclusively for numerical calculations, and because of the extremely costly division operation involved in converting a true binary value into human-printable decimal digits, many computers worked internally on decimal digits represented as 4-bit BCD numbers (ENIAC was a true decimal machine, with tubes with ten active electrodes).

Thus 4 bits is as natural as 8 bits, both powers of 2.

Later on, when prices came down and it made sense to use larger memory and processor widths, the ability to store nine BCD digits in a 36-bit word made this architecture popular for financial transactions at the time.

As the memory prices came down, people also wanted to store texts in computer memory. Storing six 6-bit characters was quite obvious.

However, IBM in their 360 (all-around) series chose to use two 4-bit BCD digits to store one character, and hence the popularity of the 8-bit byte.

After all these years, I have always wondered how such excellent examples of mechanical engineering could work more than an hour without maintenance.

These were completely mechanical (including serial to parallel and parallel to serial conversion) and the only semiconductors were the power supply rectifier diodes and the transistor in the 20 mA constant current supply :-).

Reply to
upsidedown

We wanted to be able to UNDERLINE stuff!! Very important! Or, to get a better lowercase feel, you used that line for a little bit of a descender.

A second thing I recall was the battle between hex and octal. But once the 8-bit thing caught on... octal was doomed. I still use octal for one reason... I have a very simple method of converting back and forth between octal and decimal that I do on paper. I've posted it before. But it is a lost art for most. Still, I find it more practical than using hex, when converting by hand.

I still remember the KSR-35 as an absolute marvel of mechanical engineering. The person who designed that thing (and it could NOT have been designed by committee) must have locked themselves up in an asylum, after. I don't mean that in a bad way -- it was stunningly well executed. And it was also incredible to watch in operation.

They worked quite well, I think. I worked on some problems just twice over the few years I used it. And it was never something fundamental about the design. It wore evenly and well. All of it balanced like a fine watch.

That said, other things I worked on around that time didn't work so well. Ever see an 8k drum drive operating on a PDP-8? Mostly didn't operate, darned things. They were big and pointless. The only thing they had going for them was the .. I think .. 96 read heads or something like that. No seek time, just rotation delay time and then read time. Tried to use them, but the supposed benefits didn't seem worth it, given the down time I saw.

Yup.

Damn. Now I need to see if anyone has one to sell. Crap. My wife is going to hate it. Luckily, I've got over 7000 sq ft of space here, so I should be able to find some corner for it. If I can find one. Would be fun to clean one up and get it working, if the problems weren't too fundamental after all these years.

My eyes are probably bigger than my stomach, though.

Jon

Reply to
Jon Kirwan

Line printers were a varied lot. A popular design, at least for higher end models, used a print train or chain with type slugs moving around the train. On the popular 1403 printer, for example, the train had 240 positions, which could, at least theoretically, hold that many different characters, although that would require a custom train. The standard 1403 trains ("TN") with the largest variety of characters had two complete copies of 120 different characters. These were rather slow trains, and tended to be used only for text-heavy output; they included, amongst other things, superscript digits and a variety of mathematical symbols. More commonly, trains with more copies of fewer characters were used, the idea being that a slug with the appropriate character had to rotate past every print position on the line before the printer could advance to the next line.

I actually saw (custom) trains with over 200 different characters, which included italics and other funky things. These were very slow.

Many trains had a mix of frequencies - "PCS-AN", for example, with four copies of most characters, two copies of some unusual characters, and eight copies of common characters such as digits. This would allow print speeds (1403-3) of 1385, 920 or 550 lines-per-minute, depending on what characters were on a given line.

The fastest trains had as few as 42 characters ("YN"), but six copies of 39 of those (and two copies of the other three). Print speeds were maximized when avoiding those last three characters. The fast trains could often sustain the full (or nearly the full) 1400 LPM of the printers, while a TN train would usually chug along at about a third of that (~550 LPM).

Probably more than anyone wanted to know about line printers, but...

Reply to
Robert Wessel

Up until very recently (this summer) I had quite a bit of old computer stuff. I have sold or given away much of it now, which helps assure domestic tranquility. The 9-track tape drives were about as big as a refrigerator, but heavier... Still have an ADM-3A terminal and other 70s-era computing ephemera, but I let the most bulky stuff go, so I still have the tape drive manuals and the PDP-8 manuals but not the devices.

Someone should write a VR TTY simulator that includes realistic sound. Here's a video clip of a model 28 in action:

formatting link

Ed

Reply to
Ed Beroset

This was interesting reading, since I have only used and maintained drum printers.

Those chain printers must have been quite expensive with switchable "fonts". Did the printer controller have an actual CPU with local core memory, or was it hard-wired with standard RTL/DTL/TTL logic chips?

Reply to
upsidedown

Something like a 1403 needed a separate control unit (a 2821) to attach to a S/360 channel. The 2821 was a combination of device specific hardware* and a microsequenced control unit. They included a bit of memory necessary for performing their function (a record or two worth of buffer, and, in the case of printer, a buffer containing the image of the print train).

Later printers had actual processors in their control units.

The trains were not terribly expensive, several hundred dollars for the standard ones (although inflation will make that rather more in today's dollars).

A decent picture:

formatting link

You can see an installed train at:

formatting link

The end of the train is sticking out from under the ink ribbon wrapped around the left side (as viewed) of the gate. The gate is the part in the right foreground of the picture, swung out from the printer. Just below and to the right of the corner of the train is the rather industrial motor that drove the train. The print hammers were behind the paper, and smacked the paper into the ribbon and slug as the slug passed the appropriate position.

Changing the train was fairly simple: open the gate, remove the ribbon, flip up the two handles on the train (as seen in the first photo), and lift it out - then reverse the process for the new train. You'd then have to load a new mapping buffer into the control unit, usually from the spooler on the OS driving the printer. Typically that would be semi-automated - the print job would be written to the spool specifying the required train; when the spooler printed it and the wrong train was installed on the printer, it would stop and tell the operator to mount the new train, and when the operator acknowledged doing that, it would download the new mapping buffer.

Somewhat oddly, the second photograph shows already printed paper being fed through the printer.

*2821s could control a number of printers, card readers and punches; you had to order the right hardware (for example, a model one could support one 1403 printer and one 2540 reader/punch; a model three could support three 1403 printers).
Reply to
Robert Wessel
