On a sunny day (Wed, 15 Apr 2015 11:39:39 +0200) it happened David Brown wrote in :
That is an impressive list.
6502... hey, I still have a 68000 processor in a box somewhere! The problem with C, at least for some cases, is that it hides the hardware. I mean, when I wrote code for a Raspberry, in C, I really do not know what it does and which registers it uses, and how, except for the specific I/O parts where I have to go to the processor datasheet. But I trust gcc. That does not mean gcc is optimal by definition, though. And no debugging by staring at registers, as that wastes time and does not help either. I once did some graphics card programming, and yes, at that point it all became time-critical asm... Hey, I even did my own graphics card in hardware. So YMMV.
The other thing with C is the typing. I mean, I got funny remarks in the early days when I thought 'int' was 16 bits (coming from the C/80 CP/M compiler, and writing some code for the PC). That can really get you into trouble porting code from one system to another. Say, simple headers for files: use uint32_t or whatever. For the rest, C is transparent and hides the processor architecture. I do not like all those defines in embedded C for micros; it makes the code unreadable. Then you may as well write in asm and take control; there are plenty of good libraries around. And much of what is done in floating point can just as well, or better, be done in 32-bit arithmetic. Look at gm_pic2
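The "use uint32_t for file headers" advice can be sketched as follows. This is a minimal, hypothetical example (the header fields and the little-endian layout are made up for illustration): the point is that serializing fixed-width fields byte-by-byte gives the same file format on every platform, regardless of the host's int size, struct padding, or endianness.

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical 12-byte file header. The field names and layout are
   invented for this example; the technique is what matters. */
typedef struct {
    uint32_t magic;    /* file identifier */
    uint32_t version;
    uint32_t length;   /* payload size in bytes */
} FileHeader;

/* Serialize into buf[12] in little-endian order, one byte at a time,
   so the on-disk layout never depends on the host. */
static void header_pack(const FileHeader *h, uint8_t buf[12])
{
    const uint32_t f[3] = { h->magic, h->version, h->length };
    for (int i = 0; i < 3; i++) {
        buf[4*i + 0] = (uint8_t)(f[i] >>  0);
        buf[4*i + 1] = (uint8_t)(f[i] >>  8);
        buf[4*i + 2] = (uint8_t)(f[i] >> 16);
        buf[4*i + 3] = (uint8_t)(f[i] >> 24);
    }
}

static void header_unpack(FileHeader *h, const uint8_t buf[12])
{
    uint32_t f[3];
    for (int i = 0; i < 3; i++) {
        f[i] = (uint32_t)buf[4*i]
             | ((uint32_t)buf[4*i + 1] << 8)
             | ((uint32_t)buf[4*i + 2] << 16)
             | ((uint32_t)buf[4*i + 3] << 24);
    }
    h->magic = f[0]; h->version = f[1]; h->length = f[2];
}
```

Had the struct simply been fwrite()n to disk, a compiler with 16-bit int, different padding rules, or opposite endianness would have produced an incompatible file.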
parsing NMEA. Just to go back to the PIC: don't you like goto? :-)
But in case you missed it, my point is that while a Raspberry Pi is a useful device, it does not replace other devices. It is another tool in the toolbox, that's all.
I have never used a "normal" M68K processor like the 68000 itself - but I have many systems using the 68332 and 68376 whose core is a variant of the 68020.
That can be an advantage and a disadvantage. Mostly it's an advantage - it means you can move code from one system to another, and re-use code in a way that is impossible in assembly. It also means you don't have to think about all the low-level details all the time.
But it can be a disadvantage, especially when people used to programming C on big systems (like PCs) work on small devices without understanding how the generated code will work.
When I start working with a new processor, I like to study the cpu architecture and assembly code. If I have the time, I like to write some assembly code too - but I have enough experience to get a decent understanding just from reading the reference manuals. This means that when I write C code for the device, I have a good understanding of what is going on at the low level, and I can read and understand the assembly listings or debugger steps. And occasionally I write small parts of the code in assembly, if there is good reason to do so.
Yes. gcc is not bug-free, but it's rare that a problem is caused by the compiler rather than the user code!
/Usually/ that's the case. Different kinds of debugging problems can be solved with different tools. "printf" debugging is one tool, "gdb" debugging is another one. It's good to have several options available.
My earliest assembly programming was on a ZX Spectrum (Z80A processor), and the key debugging tool was changes in the sounds from the voltage regulator!
I use fixed size types (uint32_t, etc.) for almost all purposes. I don't think I have ever had an issue in my own code caused by changes in the size of the standard C types - but I have seen plenty in other code.
There are other target-varying issues. Endianness can differ between targets, but people often assume a specific endianness in the code they write. On some devices (mostly DSPs and dinosaur mainframes), a "char" can be wider than 8 bits. Theoretically, signed integers might not be two's complement - but that is only for dinosaurs.
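The endianness point can be made concrete with a small sketch. Detecting the host's byte order at run time is occasionally useful, but the more robust habit is never to depend on it at all: decode multi-byte fields by shifting individual bytes, which produces the same result on any host.

```c
#include <stdint.h>
#include <string.h>
#include <assert.h>

/* Returns 1 on a little-endian host, 0 on big-endian. */
static int host_is_little_endian(void)
{
    const uint32_t probe = 1;
    uint8_t first;
    memcpy(&first, &probe, 1);   /* inspect the lowest-addressed byte */
    return first == 1;
}

/* Endianness-independent decode of a 32-bit big-endian ("network
   order") value, e.g. from a protocol header. Same result anywhere. */
static uint32_t load_be32(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}
```

Code that instead casts the byte pointer to `uint32_t *` and dereferences it silently bakes in the endianness (and alignment rules) of whatever machine it was first tested on.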
And when you are using brain-dead processors like the PIC, 8051, COP8, etc., you need a lot of compiler- and target-specific extensions if you want decent code generation. The advantage of the AVR over other 8-bit devices is that this is /almost/ eliminated (the only exception is accessing data in flash). With the msp430 (16-bit) or 32-bit devices (ARM, 68K, PPC, etc.), you can write standard C for everything that is not interacting with the hardware (such as interrupts).
Code can be made unreadable in any language :-)
No, there are not many good assembly libraries - not compared to C.
That is often the case - but 32-bit integer arithmetic is vastly easier to do in C than in 8-bit assembly.
My old boss, who was an avid x86 asm programmer in his free (and not so free) time - actually, we were close to IBM - had a lot of asm libraries for that processor... I learned from that, and asm routines that work I use over and over again.
That reminds me: I have libc.info, all the files unzipped into one large file, as a reference in my home directory. I could not write any C code on Linux without it.
Some of the TI chips (Sitara) are quite interesting - as well as a Cortex-A8 processor running at 1 GHz-ish, they have a couple of auxiliary 32-bit CPUs for real-time processing that hum along at 200 MHz. So even if you're running Linux on the A8, you can do real time without mucking about.
Then there's the Zynq FPGA + hard ARM cores on-a-chip approach to SOCs.
The software for the Zynq, Vivado or something, is a nightmare. We've done a couple of Zynq projects, but will probably cut over to Altera in the future, now that they have their SoC things going.
--
John Larkin Highland Technology, Inc
picosecond timing laser drivers and controllers
jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
On a sunny day (Wed, 15 Apr 2015 12:14:49 -0700 (PDT)) it happened " snipped-for-privacy@krl.org" wrote in :
Dan
It is obvious that the Raspi has triggered something in the world of embedded, and now one after another tries to mimic it - Beaglebone, and this. That is a good thing. I prefer Debian (Ubuntu is also Debian-based, but different). The free codec of the Odroid is cool (Raspi should make it free too), but just the idea of it having anything to do with Android makes me avoid it. It does not even make sense, and Android sucks big time (yes, I have one). The Odroid having less GPIO is bad; the ADC does not matter, just hang a PIC on it for $2. You can never have enough I/O pins. One core, quad core - what do we all do with it, if you just steer a robot? I could use an extra core in DVB-S encoding, but it is working on one. Fast graphics matters, and indeed the Raspi USB sucks, so the Odroid could be better, but I need to see it first. I mean, the Raspi in X (version B, which I have) is _s l o w_.
Fast gigabit ethernet? Well, I am not sure the LAN here is up to it ;-) Not even the switch; it's from the year 2000 or so. How fast is your net connection - what do you need 1 gigabit for? And I think I measured 450 mA, not the 800 mA you quote, for my version B.
I'd forgotten about that. The AVR does many instructions in a single clock cycle (two cycles if accessing memory, and a few more for change-of-flow instructions).
My question is: how do AVRs take only 1 clock cycle where PICs take 4? PICs seem to operate in a well-defined sequential manner, and I've always been nervous about AVRs, given that I often operate well beyond the rated temperature.
This is all from memory, so the details may be fuzzy...
The PICs (PIC16 at least - I know little about the PIC18) use a non-pipelined 4-step sequence: fetch, decode, execute, write-back, or something along those lines - 4 clock cycles per instruction cycle. Most instructions take one instruction cycle, but some (typically changes of flow) take more.
The AVRs have two clocks per instruction cycle - fetch/decode, then execute. These stages are pipelined, so each instruction overlaps with the next one (which is why changes of flow need an extra cycle). Reads or writes to memory need an extra cycle - but since the cpu has 32 registers, many operations are register-to-register.
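The practical upshot of the clocks-per-instruction ratio is easy to put in numbers. A tiny sketch of the arithmetic, using the nominal figures from the posts above (real programs mix in multi-cycle instructions, so these are upper bounds, not measurements):

```c
#include <assert.h>

/* Nominal instruction throughput, in thousands of instructions per
   second (kIPS), for a given clock and a fixed clocks-per-instruction
   ratio. Illustrative only: real code has branches and memory
   accesses that lower both figures. */
static long kips(long clock_khz, int clocks_per_insn)
{
    return clock_khz / clocks_per_insn;
}
```

So at the same 20 MHz crystal, a 4-clocks-per-instruction PIC tops out around 5 MIPS, while a pipelined, effectively 1-clock-per-instruction AVR approaches 20 MIPS.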
I know that PICs can often operate beyond their rated temperatures. I haven't tried it with AVRs, so I can't say (although Atmel do make 150C-rated chips). I can't see any reason why AVRs would be a problem at higher temperatures (following the usual practices - lower voltage, lower clock speed, don't write to flash, don't expect wonders from the ADC, etc.).
Pipelining, I believe, plus an instruction set oriented towards register operations (which works because there are plenty of registers).
It is still verging on obsolete for new projects, IMO. It would have to be some really low-power, tiny system to make me choose one now, rather than a Cortex-M0 part.
There are plenty of small cpus where an instruction cycle takes multiple clock cycles. 8051 cpus used to take 12 clock cycles per instruction cycle, though more "modern" implementations are typically 4 clocks per instruction cycle.
One of the key guiding principles of RISC design was to keep the instructions simple in order to get one instruction per cycle (with pipelining).