large microprocessors?

I basically agree with you. The processors we looked at are mostly used in consumer products and small-scale process control systems.

w..

Reply to
Walter Banks

I think the reason comes down to economies of scale in production. A desktop CPU has much more die area devoted to the processor, so adding the extra RAM for a cache is economically viable. These processors use a more expensive chip process to fit all of this on the die, but they carry enough value to justify the expense.

For the lower-end processors, the process technology doesn't really allow a memory array that large while staying within the value range of the part. The fact that this also aligns with the memory needed for typical uses of these chips is a bonus that reduces the pressure to develop the exceptional part.

The way to get a microcontroller with a somewhat larger memory space is to use an external memory chip (not a memory stick with multiple chips). SRAM would be a smaller/cheaper choice than DRAM.

Reply to
Richard Damon


I can imagine how on small processors the figures can be consistent (though I find the assembly/C figures way too close to support this, i.e. they are so close it looks like things are as Jon suggests: one uses whatever is available). But on larger systems, where buffering may or may not be needed, things can quickly change by some huge factor from application to application. A TCP/IP stack running at 100 Mbps (and actually using that bandwidth to stream lots of data) can alone eat up a few hundred kilobytes or even a few megabytes, just for inbound packet buffering.

Dimiter

------------------------------------------------------ Dimiter Popoff Transgalactic Instruments


Reply to
dp

It occurs to me also that most microcontrollers are mixed-signal chips (A/Ds and so forth). That might dictate some fab process decisions that don't play well with high-density memory.

I'm pretty impressed with that STM part that Stephen Pelc mentioned. 1M flash, 192 KB RAM (I didn't see a 256 KB version, but 192 KB is almost as good), tons of on-chip peripherals, and there's an ultra-cheap development board for it. The next step up is external memory, as you mentioned.

Reply to
Paul Rubin

Actually quite a bit less RAM, and a lot of the RAM they used held program code that, in the case of this STM part, would run from flash.

The chip has an SDIO controller. It's too bad that the Discovery evaluation board doesn't have an SD card socket. That would allow re-creating the PDP-11 Un*x experience ;-).

But, I think in reality I'd run a simple RTOS or a standalone application on this chip. The next step up would be a Linux board as those have also gotten quite inexpensive (various boards inspired by the Raspberry Pi). They just have more power drain and more software to deal with, and a bit less realtime capability.

Reply to
Paul Rubin

That's not particularly surprising: "general" code uses stack and local variables at a statistically fairly similar rate, so the RAM-to-ROM ratio will be reasonably consistent. When moving to 32-bit, the ROM usage (for the same program functionality) typically increases a little, but the RAM usage increases by a factor of 2 to 4 (due to the wider integers). With this correction factor applied, the ratio is again reasonably consistent.

The exception to this is buffer space - for arrays of sample data, communication buffers, etc. This is particularly common in bigger micros, especially ones with high speed communication (USB or Ethernet).

Thus when manufacturers make a family of devices, they will typically offer a range of RAM/flash sizes for different uses, but keep a similar ratio across the family. And a 32-bit family will have about 4-8 times the RAM for the same flash size as an 8-bit family would.

Statistically, this all makes economic sense. But as Jon says, it's a pain if that doesn't fit the task in hand.

Reply to
David Brown

For desktop yes, embedded no.

Hmm, the Raspberry Pi uses a difficult-to-obtain Broadcom chip with stacked packages.

The first version had 256MB with HALF of that by default assigned to graphics, yes, 128MB to run Linux.

The second version has a 512MB RAM package, using SD card IO for disk.

These days you can reduce the graphics ram size significantly.

--
Paul Carpenter          | paul@pcserviceselectronics.co.uk 
    PC Services 
Reply to
Paul

The 16% (compiled code) and 20% (handwritten) RAM/ROM ratios have a simple explanation: compilers are better at re-using variable space than hand-written code can reasonably be.

RAM re-use is an accounting problem, something computers are good at. In HLLs it is redone on every compile.

w..

Reply to
Walter Banks

"Large microprocessors", isn't that like "jumbo shrimp"?

--

Rick
Reply to
rickman

Not really. Given the evolution of the term, a "micro" processor is really any device substantially smaller than your average dishwasher.

I've never heard a good reason why the evolution of terms didn't continue further down to nano- or pico-processors, but it didn't.

Reply to
Hans-Bernhard Bröker

NXP LPC3250. 256k on-board SRAM, and 32k each of I/D cache. It's an ARM9, about $7 each in reasonable quantities. Vendor support is...present...sort of, and the peripherals have some interesting personality quirks.

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com 
Email address domain is currently out of order.  See above to fix.
Reply to
Rob Gaddi

Hmm, yes, I can see how this can be an important factor. Less so for my code, as my global variables are not very reusable, and there is perhaps not much to be gained from variables on the stack, but I don't really know. Then again, my code for small 8-bit MCUs will likely not be an exception to what you have observed. Do you have a figure for the high/low-level reuse ratio, I mean from the same research you did? Quite interesting.

Dimiter

------------------------------------------------------ Dimiter Popoff Transgalactic Instruments


Reply to
dp

Another possibility is that assembly language programmers are more likely than HLL programmers to use intricate code minimization schemes and avoid boilerplate libraries, function prologues, etc.; so the assembly programs may tend to have less code, while using about the same amount of RAM storage.

Reply to
Paul Rubin

"Microprocessor" is usually taken to mean a CPU that fits on a single chip (as distinct from multi-chip designs), although a few folks have fudged that a bit. It's not clear what a "nano" or "pico" processor would be on that progression.

Reply to
Robert Wessel

That would change the RAM/ROM statistics. It is something we looked at. There is a style factor in HLL vs. asm coding that accounts for some of the differences: many (not all) HLLs keep local variables on the stack, which makes access to those variables more expensive in terms of code size. This would skew the statistics somewhat, except we also found that compiled code was smaller than similar assembly applications. (I don't want to start another asm vs. HLL thread.)

In the end, the HLL/assembler RAM/ROM ratios came down to one factor: RAM accounting in an HLL.

w..

Reply to
Walter Banks

The standard deviation was around 1% and both were essentially normally distributed.

w..

Reply to
Walter Banks

Freescale Kinetis X claims to have up to 4 MB FLASH and 512 kB SRAM.

-jm

Reply to
Jukka Marin

Some TI Hercules and some Freescale have 256K of SRAM, and 337 or 516 "pins". Not cheap.

NXP 43xx with 136K SRAM is not a bad choice. Easy interface to single chip 32MB external SDRAM, pretty low total cost (probably 5-10 in qty).

Reply to
Spehro Pefhany

I think he really means "large microcontrollers", since he wants all the memory on-chip (or at least in one package).

Best regards, Spehro Pefhany

--
"it's the network..."                          "The Journey is the reward" 
speff@interlog.com             Info for manufacturers: http://www.trexon.com 
Reply to
Spehro Pefhany

Thanks, this is kind of interesting. It was announced in 2011 but doesn't appear to have actually shipped. There is some marketing stuff for it on freescale.com but other stuff seems to have been taken down. I wonder if the chip was cancelled.

Anyway, I think I understand the general picture by now, and the 192 KB STM part is probably the best answer for the stuff I'm thinking of, mostly because of the low-cost development boards (Discovery and Olimex). If 192 KB isn't enough, 512 KB probably isn't enough either. The next step past that would involve off-chip DRAM.

Reply to
Paul Rubin
