really tiny uP

If unconventional tools don't scare you:

formatting link
Numerous examples here:
formatting link

--
This is the first day of the end of your life. 
It may not kill you, but it does make you weaker. 
If you can't beat them, too bad. 
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst
Reply to
none

You've already got an STM32 toolchain, and some of the peripherals might also be close to what the STM32G0 series uses. As I understand it, your volumes are not high enough to justify spending NRE to optimize for 8-bit cost, so I'd look carefully at the STM32G0 series.
--
mikko
Reply to
Mikko OH2HVJ

gcc is highly optimized, and the AVR 8-bit architecture, with its 32 general-purpose registers, was designed with HLL efficiency in mind.

C++ is a fine language for it. For any given algorithm it's very hard nowadays to beat the compiler using assembler with respect to speed or code size (whichever you ask the compiler to optimize for); it really does know the target better than you do.
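To make that concrete, here is a minimal sketch (a made-up routine, assuming avr-gcc as the toolchain) of the kind of code where inspecting the compiler output shows how well it uses those 32 registers:

#include <stdint.h>

/* Scale a signed 16-bit sample by a Q8 fixed-point gain. */
int16_t scale_q8(int16_t sample, int16_t gain_q8)
{
    return (int16_t)(((int32_t)sample * gain_q8) >> 8);
}

/* Build and look at the generated code:
 *   avr-gcc -mmcu=atmega328p -Os -c scale.c
 *   avr-objdump -d scale.o
 */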

Reply to
bitrex

Hi, John:-

That layout is a thing of beauty, as usual.

EOL is the bane of existence for industrial designs. At least the ARM architecture is persistent. Microchip has been pretty good, but I'm not sure they are going to continue SAM development as enthusiastically. A couple of years ago they indicated that a PWM with fine resolution (~150 ps, competitive with TI et al.) was coming real soon. That wait has outlasted the application, and in fact the company that required it.

Best regards, Spehro Pefhany

Reply to
speff

I try to guess (since I don't know the project details here) what might be relevant in the particular case. Code size is rarely particularly relevant, once you meet the basic requirements (fits in the chip). For a given piece of code, compiled with a reasonable optimising compiler, the size of the binary varies very little from cpu to cpu. If you are not doing much in the way of arithmetic (especially no 32-bit or bigger integers, and no floating point), then you are unlikely to see more than a factor of two spread in the binary size over a range of 8-bit, 16-bit and 32-bit devices. Once you include arithmetic (and the OP is doing maths on the chips) and bigger data sizes, the difference can increase more dramatically - most 8-bit devices will make significantly bigger code. The AVR doesn't suffer quite as badly here as most CISC 8-bit devices, since it has lots of registers, but it still sees the same effect.
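As a rough illustration of that arithmetic effect (hypothetical code, not the OP's):

#include <stdint.h>

/* 16x16 -> 32 multiply-accumulate. */
uint32_t mac32(uint32_t acc, uint16_t a, uint16_t b)
{
    /* On a Cortex-M0+ this compiles to roughly one multiply and one add.
     * On an 8-bit core with 8-bit registers, both the widening multiply and
     * the 32-bit add are built from sequences of 8-bit instructions, so the
     * code size and cycle count grow several times over. */
    return acc + (uint32_t)a * b;
}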

And as I mentioned above - look at the memory sizes you get on a range of cheap Cortex-M microcontrollers. They are going to be several times bigger than the flash sizes you would get on a cheap 8-bit CISC device. (This comes from the economics of die sizes and the feature size used in making the chip.)

Speeds, on the other hand, span many orders of magnitude. (So do prices, package sizes and peripherals - making them vastly more relevant issues than code size when choosing the best device.)

The OP has specific requirements about doing calculations at speed. That makes speed relevant. He has - sensibly enough - absolutely no requirements about code size, making it irrelevant.

Often speed is not a relevant issue either - but it /is/ if you need to do a range of calculations every millisecond.
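For what it's worth, that requirement usually takes a shape something like this generic sketch (not the OP's code; the tick flag is assumed to be set by a 1 ms timer interrupt):

#include <stdbool.h>

static volatile bool tick_1ms;          /* set true by a 1 ms timer ISR */

void timer_1ms_isr(void)                /* hypothetical ISR hook */
{
    tick_1ms = true;
}

void main_loop(void)
{
    for (;;) {
        if (tick_1ms) {
            tick_1ms = false;
            /* all the per-millisecond calculations have to finish here,
             * worst case, before the next tick arrives */
        }
    }
}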

Certainly some problems can be done on 8-bit devices. /Comfort/ is somewhat subjective - I have found some tasks on 8-bit devices to be frustratingly and irritatingly inefficient, even though they ran well within the time requirements. Similarly, faffing around with "__flash" keywords, bank switching, limited tools, idiot licensing (not applicable to SDCC, of course) and all the rest that comes with 8-bitters doesn't mean that the chip can't do the job - but it certainly affects the comfort of the developer.
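For example, the "__flash" annotation referred to above looks something like this with avr-gcc's named address space extension (C only, not C++; the table values are illustrative):

#include <stdint.h>

static const __flash uint8_t table[4] = { 0x00, 0x07, 0x0e, 0x09 };

uint8_t lookup(uint8_t i)
{
    /* the data stays in flash and is read directly from there,
     * with no copy into scarce RAM */
    return table[i & 0x03];
}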

It is a common issue with these types of devices. Again, it partly comes down to the processes used to make the dies - SRAM cells are expensive in the process sizes used on 8-bit devices, and much cheaper on the smaller processes typically used on 32-bit cores. But it is also often a limitation of the core design, which may not be capable of addressing larger memories well.

Code size is irrelevant. Code density is irrelevant. The only part that matters is the code size /relative/ to the flash size. If your program fits in the device, it doesn't matter if you use 20% of the flash or 80% of the flash - it only matters if you need 120% of the flash.

But people do regularly try to do too much with small devices. They get used to a particular device or family, and keep using it - and they keep using it well after the point when they know they should have switched to something bigger. (I'm as guilty of this as anyone.)

I do agree that processor speed is often not an issue - a good deal of what is done in microcontrollers does not need fast processing. But some things do - and the OP wants calculations running at speed. Also note that doing processing faster can mean finishing sooner and sleeping more, reducing power - which is sometimes highly relevant, and sometimes highly irrelevant.
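A rough sketch of that race-to-sleep pattern on a Cortex-M part (the work function is a placeholder, not anyone's real code):

static void do_calculations(void)       /* placeholder for the real work */
{
}

void main_loop(void)
{
    for (;;) {
        do_calculations();              /* finish the work quickly... */
        __asm volatile ("wfi");         /* ...then sleep until the next interrupt */
    }
}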

I haven't used the STM8, no - I have been generalising about the class of 8-bit CISC devices (of which I have used a fair number, in C and assembly). Details vary, of course - there are some quite powerful 8-bit CISC devices (the Z80, for example, though it is sometimes classed as 8/16-bit rather than pure 8-bit).

It does sound like the STM8 is not as limited as I thought, so I am happy for the lesson here.

I still would not consider it for a new project, without overwhelming requirements. The world has moved to 32-bit cores, for many good reasons. A balance of many factors might mean the right microcontroller will happen to be a legacy 8-bit core, but you would never think "Do you know what would make this Cortex-M device nicer? Swapping the core for an STM8 core."

Reply to
David Brown

The Z80 is a particularly powerful 8-bit device with strong 16-bit features (arguably, it is an intermediate 8/16-bit device). It is a much better fit for C than most 8-bit devices.

The kind of things that get in the way of writing C on a cpu are poor support for 16-bit values (the minimum size of an "int", and thus prevalent throughout C code), non-linear addressing and bank switching (often needing a mass of extensions and annotations on data), separate code and data spaces (messing up access to constant data), poor pointer support, and poor support for stack frames. The Z80 does not suffer from any of these, though some arithmetic is slow on 16-bit data.
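A small example of why that 16-bit minimum matters so much in practice:

#include <stdint.h>

uint8_t average(uint8_t a, uint8_t b)
{
    /* By the language rules, a and b are promoted to int (at least 16 bits)
     * before the addition, so the intermediate sum can reach 510 without
     * wrapping; the cast narrows the result afterwards.  A CPU with decent
     * 16-bit support does this naturally - one without it fights the
     * language on almost every expression. */
    return (uint8_t)((a + b) >> 1);
}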

Often C compilers for 8-bit devices choose to make their tools non-standard in an attempt to make it easier for programmers to write code for these chips. Some of the things I have seen are making "int" 8-bit, keeping 16-bit "int" but disregarding the integer promotion rules, and making "const" mean "in flash". These things are, IMHO, worse than forcing the user to use extensions to get efficient results. (It's better to have inefficient but correct results than efficient wrong results.)
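To see what "efficient wrong results" means, consider this fragment (illustrative values) under a compiler that quietly skips the promotions:

#include <stdint.h>

uint16_t promo_demo(void)
{
    uint8_t a = 200;
    uint8_t b = 100;
    /* Standard C promotes a and b to int, the sum is 300, and that is what
     * ends up in the uint16_t.  A compiler that "helpfully" adds them in
     * 8 bits wraps the sum to 44 first - smaller, faster, and wrong. */
    uint16_t sum = a + b;
    return sum;
}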

Yes, but the AVR designers made a few big mistakes in the design that greatly reduce the HLL-friendliness.

You can certainly program the AVR in C++ quite happily. But the picture is not quite as rosy as you suggest - there are many situations where gcc produces noticeably sub-optimal code (and it does so more for C++ than for C, even when the code is valid for both languages and has the same meaning in both languages).

Reply to
David Brown

ST has a longevity program

formatting link

That does not, however, save one from product allocation, as seen with the F1 parts now.

Bye

--
Uwe Bonnes                bon@elektron.ikp.physik.tu-darmstadt.de 

Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt 
--------- Tel. 06151 1623569 ------- Fax. 06151 1623305 ---------
Reply to
Uwe Bonnes

The real advantage of selecting cheap parts is that it's reasonable to buy a few reels, a product lifetime supply. The bummer is that some parts go EOL with little or no notice.

Reply to
John Larkin

This prompted me to take a look at the SAMD10 series and its on-chip ADC. I happen to have an eval board that I picked up at a conference somewhere with the SAMD20, which has a similar ADC. I tried loading the ADC example from Atmel/Microchip Studio. It compiles without errors or warnings (not always true) and displays an 8-bit hex result over the virtual com port. So I apply a known voltage to the ADC. Hmm.. the number seems off, by maybe 8%, which is quite a bit.

Dig into the code, and they're collecting 31 results in an array but averaging over 32, with the last element still zero from power-up initialization. Dumb off-by-one error. Okay.. fix that. But when the input is saturated I still get 0xee, not 0xff.

Turns out they are doing this to present a printable ASCII value:

sensor_value[0] = hex[(average_adc_value & 0xf0) >> 4];
sensor_value[1] = hex[average_adc_value & 0x0f];

With the definition:

static const uint8_t hex[] = "01234567890abcdef";

!!!!!

Fix that idiocy and the calibration is within an LSB.
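For reference, the corrected lookup would be something like this (the wrapper function is just a convenient home for the two statements quoted above; the averaging loop likewise needs to divide by the number of samples actually collected):

#include <stdint.h>

static const uint8_t hex[] = "0123456789abcdef";   /* no doubled '0' this time */

static void format_hex(uint8_t average_adc_value, uint8_t sensor_value[2])
{
    sensor_value[0] = hex[(average_adc_value & 0xf0) >> 4];
    sensor_value[1] = hex[average_adc_value & 0x0f];
}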

Two serious errors in one simple example.. cripes. Are they using drug-addled interns to write this stuff?

Best regards, Spehro Pefhany

Reply to
speff

Sorry to say, but it is the standard quality of software from the chip vendors. That's why I do not use e.g. the CMSIS library.

--

-TV
Reply to
Tauno Voipio

I love it when I get to run my fingernail across the digits row :P

Reply to
Clifford Heath

PIC10F320 - can't get much tinier than a SOT-23. The PIC10F320 has an ADC and an on-chip temp sensor and costs a few tens of cents. The ATtiny5/10 is similar but lacks the on-chip temp sensor.

piglet

Reply to
piglet
