64-bit embedded computing is here and now

Sometimes things move faster than expected.
As someone with an embedded background this caught me by surprise:

Terabyte microSD cards are readily available and getting cheaper.
Heck, you can carry ten of them in a credit card pouch.
Likely to move to the same price range as hard disks ($20/TB).

That means that a 2+ square inch PCB can hold a 64-bit processor and enough storage for memory mapped files larger than 4GB.
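
A minimal sketch (my own, assuming a POSIX host with a 64-bit build) of why files beyond 4GB want a 64-bit address space: mapping a sparse 5 GiB file in one piece is something a 32-bit size_t/void* simply cannot express.

```c
/* Sketch, not from the post: map a >4 GiB file in one piece.
 * Assumes a POSIX system; the file is sparse, so no real disk is used. */
#include <fcntl.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int map_big_file(void)
{
    if (sizeof(void *) < 8)
        return -1;                      /* 32-bit build: can't even try */

    const off_t len = (off_t)5 << 30;   /* 5 GiB, sparse on disk */
    char path[] = "/tmp/bigmap-XXXXXX";
    int fd = mkstemp(path);
    if (fd < 0)
        return -1;
    unlink(path);                       /* file vanishes when fd closes */
    if (ftruncate(fd, len) != 0) {
        close(fd);
        return -1;
    }

    void *p = mmap(NULL, (size_t)len, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);
    if (p == MAP_FAILED)
        return -1;

    volatile char c = ((char *)p)[len - 1];   /* touch the last page */
    (void)c;
    return munmap(p, (size_t)len);
}
```

On a 32-bit target the same mapping has to be windowed in pieces, which is exactly the banking chore all over again.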

Is the 32-bit embedded processor cost vulnerable to 64-bit 7nm devices as the FABs mature? Will video data move to the IoT edge? Will AI move to the edge?  Will every embedded CPU have a built-in radio?

Wait a few years and find out.

Re: 64-bit embedded computing is here and now
On 6/7/2021 7:47 AM, James Brakefield wrote:

Kind of old news.  I've been developing on a SAMA5D36 platform with 256M of
FLASH and 256M of DDR2 for 5 or 6 years, now.  PCB is just over 2 sq in
(but most of that being off-board connectors).  Granted, it's a 32b processor
but I'll be upgrading that to something "wider" before release (software and
OS have been written for a 64b world -- previously waiting for costs to fall
to make it as economical as the 32b was years ago; now waiting to see if I
can leverage even MORE hardware-per-dollar!).

Once you have any sort of connectivity, it becomes practical to support
files larger than your physical memory -- just fault the appropriate
page in over whatever interface(s) you have available (assuming you
have other boxes that you can talk to/with)
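
The faulting scheme described can be caricatured as a toy page cache (my own model; fetch_page() is a stand-in for whatever interface actually moves data from the remote box):

```c
/* Toy model of faulting pages in from a peer on first touch.
 * The "remote store" is simulated by a local array; a real system
 * would do a transfer over serial/network in fetch_page(). */
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE  256
#define NUM_SLOTS  4                      /* local RAM: only 4 pages */
#define FILE_PAGES 16                     /* the "file" is 4x larger */

static uint8_t backing[FILE_PAGES * PAGE_SIZE];  /* pretend: remote */
static uint8_t cache[NUM_SLOTS][PAGE_SIZE];
static int     tag[NUM_SLOTS] = { -1, -1, -1, -1 }; /* page per slot */

static void fetch_page(int page, uint8_t *dst)   /* "remote" transfer */
{
    memcpy(dst, &backing[page * PAGE_SIZE], PAGE_SIZE);
}

/* Read one byte of a file far larger than the local cache. */
uint8_t file_read(uint32_t offset)
{
    int page = offset / PAGE_SIZE;
    int slot = page % NUM_SLOTS;          /* trivial direct-mapped policy */
    if (tag[slot] != page) {              /* miss: fault the page in */
        fetch_page(page, cache[slot]);
        tag[slot] = page;
    }
    return cache[slot][offset % PAGE_SIZE];
}
```

The point is that the file size is bounded by the peer's storage, not by local memory; only the working set is resident.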

In my case, video is already *at* the edge.  The idea of needing a
"bigger host" or "the cloud" is already obsolescent.  Even the need
for bulk storage -- whether on-board (removable flash, as you suggest)
or remotely served -- is dubious.  How much persistent store do you
really need, beyond your executables, in a typical application?

I've decided that RAM is the bottleneck as you can't XIP out of
an SD card...

Radios?  <shrug>  Possibly as wireless is *so* much easier to
interconnect than wired.  But, you're still left with the power
problem; even at a couple of watts, wall warts are unsightly
and low voltage DC isn't readily available *everywhere* that
you may want to site a device.  (how many devices do you
want tethered to a USB host before it starts to look a mess?)

The bigger challenge is moving developers to think in terms of
the capabilities that the hardware will afford.  E.g., can
you exploit *true* concurrency in your application?  Or, will
you "waste" a second core/thread context on some largely
decoupled activity?  How much capability will you be willing
to sacrifice to your hosting OS -- and what NEW capabilities
will it provide you?

The wait won't even be *that* long...

Re: 64-bit embedded computing is here and now

I don't care what the people say--
32 bits are here to stay.

Re: 64-bit embedded computing is here and now
On 08/06/2021 07:31, Paul Rubin wrote:

8-bit microcontrollers are still far more common than 32-bit devices in
the embedded world (and 4-bit devices are not gone yet).  At the other
end, 64-bit devices have been used for a decade or two in some kinds of
embedded systems.

We'll see 64-bit take a greater proportion of the embedded systems that
demand high throughput or processing power (network devices, hard cores
in expensive FPGAs, etc.) where the extra cost in dollars, power,
complexity, board design are not a problem.  They will probably become
more common in embedded Linux systems as the core itself is not usually
the biggest part of the cost.  And such systems are definitely on the rise.

But for microcontrollers - which dominate embedded systems - there has
been a lot to gain by going from 8-bit and 16-bit to 32-bit for little
cost.  There is almost nothing to gain from a move to 64-bit, but the
cost would be a good deal higher.  So it is not going to happen - at
least not more than a very small and very gradual change.

The OP sounds more like a salesman than someone who actually works with
embedded development in reality.

Re: 64-bit embedded computing is here and now
On 6/7/2021 10:59 PM, David Brown wrote:

I contend that a good many "32b" implementations are really glorified
8/16b applications that exhausted their memory space.  I still see lots
of designs build on a small platform (8/16b) and augment it -- either
with some "memory enhancement" technology or additional "slave"
processors to split the binaries.  Code increases in complexity but
there doesn't seem to be a need for the "work-per-unit-time" to.

[This has actually been the case for a long time.  The appeal of
newer CPUs is often in the set of peripherals that accompany the
processor, not the processor itself.]

I disagree.  The "cost" (barrier) that I see clients facing is the
added complexity of a 32b platform and how it often implies (or even
*requires*) a more formal OS underpinning the application.  Where you
could hack together something on bare metal in the 8/16b worlds,
moving to 32 often requires additional complexity in managing
mechanisms that aren't usually present in smaller CPUs (caches,
MMU/MPU, DMA, etc.)  Developers (and their organizations) can't just
play "coder cowboy" and coerce the hardware to behaving as they
would like.  Existing staff (hired with the "bare metal" mindset)
are often not equipped to move into a more structured environment.

[I can hack together a device to meet some particular purpose
much easier on "development hardware" than I can on a "PC" -- simply
because there's too much I have to "work around" on a PC that isn't
present on development hardware.]

Not every product needs a filesystem, network stack, protected
execution domains, etc.  Those come with additional costs -- often
in the form of a lack of understanding as to what the ACTUAL
code in your product is doing at any given time.  (this isn't the
case in the smaller MCU world; it's possible for a developer to
have written EVERY line of code in a smaller platform)

Why is the cost "a good deal higher"?  Code/data footprints don't
uniformly "double" in size.  The CPU doesn't slow down to handle
bigger data.

The cost is driven by where the market goes.  Note how many 68Ks found
design-ins vs. the T11, F11, 16032, etc.  My first 32b design was
physically large, consumed a boatload of power and ran at only a modest
improvement (in terms of system clock) over 8b processors of its day.
Now, I can buy two orders of magnitude more horsepower PLUS a
bunch of built-in peripherals for two cups of coffee (at QTY 1)

We got 32b processors NOT because the embedded world cried out for
them but, rather, because of the influence of the 32b desktop world.
We've had 32b processors since the early 80's.  But, we've only had
PCs since about the same timeframe!  One assumes ubiquity in the
desktop world would need to happen before any real spillover to embedded.
(When the "desktop" was an '11 sitting in a back room, it wasn't seen
as ubiquitous.)

In the future, we'll see the 64b *phone* world drive the evolution
of embedded designs, similarly.  (do you really need 32b/64b to
make a phone?  how much code is actually executing at any given
time and in how many different containers?)

[The OP suggests MCus with radios -- maybe they'll be cell phone
radios and *not* wifi/BLE as I assume he's thinking!  Why add the
need for some sort of access point to a product's deployment if
the product *itself* can make a direct connection??]

My current design can't fill a 32b address space (but, that's because
I've decomposed apps to the point that they can be relatively small).
OTOH, designing a system with a 32b limitation seems like an invitation
to do it over when 64b is "cost effective".  The extra "baggage" has
proven to be relatively insignificant (I have ports of my codebase
to SPARC as well as Atom running alongside a 32b ARM)

Possibly.  Or, just someone that wanted to stir up discussion...

Re: 64-bit embedded computing is here and now
On 08/06/2021 09:39, Don Y wrote:

Sure.  Previously you might have used 32 kB flash on an 8-bit device,
now you can use 64 kB flash on a 32-bit device.  The point is, you are
/not/ going to find yourself hitting GB limits any time soon.  The step
from 8-bit or 16-bit to 32-bit is useful to get a bit more out of the
system - the step from 32-bit to 64-bit is totally pointless for 99.99%
of embedded systems.  (Even for most embedded Linux systems, you usually
only have a 64-bit cpu because you want bigger and faster, not because
of memory limitations.  It is only when you have a big gui with fast
graphics that 32-bit address space becomes a limitation.)

A 32-bit microcontroller is simply much easier to work with than an
8-bit or 16-bit with "extended" or banked memory to get beyond 64 K
address space limits.

Yes, that is definitely a cost in some cases - 32-bit microcontrollers
are usually noticeably more complicated than 8-bit ones.  How
significant the cost is depends on the balances of the project between
development costs and production costs, and how beneficial the extra
functionality can be (like moving from bare metal to RTOS, or supporting


Some parts of code and data /do/ double in size - but not uniformly, of
course.  But your chip is bigger, faster, requires more power, has wider
buses, needs more advanced memories, has more balls on the package,
requires finer pitched pcb layouts, etc.

In theory, you /could/ make a microcontroller in a 64-pin LQFP and
replace the 72 MHz Cortex-M4 with a 64-bit ARM core at the same clock
speed.  The die would only cost two or three times more, and take
perhaps less than 10 times the power for the core.  But it would be so
utterly pointless that no manufacturer would make such a device.

So a move to 64-bit in practice means moving from a small, cheap,
self-contained microcontroller to an embedded PC.  Lots of new
possibilities, lots of new costs of all kinds.

Oh, and the cpu /could/ be slower for some tasks - bigger cpus that are
optimised for throughput often have poorer latency and more jitter for
interrupts and other time-critical features.

I don't assume there is any direct connection between the desktop world
and the embedded world - the needs are usually very different.  There is
a small overlap in the area of embedded devices with good networking and
a gui, where similarity to the desktop world is useful.

We have had 32-bit microcontrollers for decades.  I used a 16-bit
Windows system when working with my first 32-bit microcontroller.  But
at that time, 32-bit microcontrollers cost a lot more and required more
from the board (external memories, more power, etc.) than 8-bit or
16-bit devices.  That has gradually changed with an almost total
disregard for what has happened in the desktop world.

Yes, the embedded world /did/ cry out for 32-bit microcontrollers for an
increasing proportion of tasks.  We cried many tears when the
microcontroller manufacturers offered to give more flash space to their
8-bit devices by having different memory models, banking, far jumps, and
all the other shit that goes with not having a big enough address space.
 We cried out when we wanted to have Ethernet and the microcontroller
only had a few KB of ram.  I have used maybe 6 or 8 different 32-bit
microcontroller processor architectures, and I used them because I
needed them for the task.  It's only in the past 5+ years that I have
been using 32-bit microcontrollers for tasks that could be done fine
with 8-bit devices, but the 32-bit devices are smaller, cheaper and
easier to work with than the corresponding 8-bit parts.

We will see that on devices that are, roughly speaking, tablets -
embedded systems with a good gui, a touchscreen, networking.  And that's
fine.  But these are a tiny proportion of the embedded devices made.

Could be.  And there's no harm in that!

Re: 64-bit embedded computing is here and now
On 6/8/2021 4:04 AM, David Brown wrote:

I don't see the "problem" with 32b devices as one of address space limits
(except devices utilizing VMM with insanely large page sizes).  As I said,
in my application, task address spaces are really just a handful of pages.

I *do* see (flat) address spaces that find themselves filling up with
stack-and-heap-per-task, big chunks set aside for "onboard" I/Os,
*partial* address decoding for offboard I/Os, etc.  (i.e., you're
not likely going to fully decode a single address to access a set
of DIP switches as the decode logic is disproportionately high
relative to the functionality it adds)

How often do you see a high-order address line used for kernel/user?
(gee, now your "user" space has been halved)
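
The high-order-bit split is a one-compare affair; a sketch (mine, and the 0x80000000 base is a common choice, not a rule):

```c
/* Sketch of the kernel/user split on the top address bit: one compare
 * decides the privilege domain, and "user" space shrinks to the
 * bottom 2 GB of the 32-bit space. */
#include <stdbool.h>
#include <stdint.h>

#define KERNEL_BASE 0x80000000u

static inline bool is_user_addr(uint32_t va)
{
    return va < KERNEL_BASE;    /* equivalently: !(va >> 31) */
}
```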

You're assuming there has to be some "capacity" value to the 64b move.

You might discover that the ultralow power devices (for phones!)
are being offered in the process geometries targeted for the 64b
devices.  Or, that some integrated peripheral "makes sense" for
phones (but not MCUs targeting motor control applications).  Or,
that there are additional power management strategies supported
in the hardware.

In my mind, the distinction brought about by "32b" was more advanced
memory protection/management -- even if not used in a particular
application.  You simply didn't see these sorts of mechanisms
in 8/16b offerings.  Likewise, floating point accelerators.  Working
in smaller processors meant you had to spend extra effort to
bullet-proof your code, economize on math operators, etc.

So, if you wanted the advantages of those (hardware) mechanisms,
you "upgraded" your design to 32b -- even if it didn't need
gobs of address space or generic MIPS.  It just wasn't economical
to bolt on an AM9511 or practical to build a homebrew MMU.

There have been some 8b processors that could seamlessly (in HLL)
handle extended address spaces.  The Z180s were delightfully easy
to use, thusly.  You just had to keep in mind that a "call" to
a different bank was more expensive than a "local" call (though
there were no syntactic differences; the linkage editor and runtime
package made this invisible to the developer).
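
That banked-call mechanism can be caricatured in C (my own sketch; the bank numbers and switch counter are illustrative, not real Z180 details):

```c
/* Caricature of a Z180-style banked "far" call: the linkage editor
 * emits a trampoline that saves/switches/restores the bank register,
 * so the extra cost is hidden at the HLL level. */
static int cur_bank = 0;
static int bank_switches = 0;     /* the hidden cost we want to expose */

static void set_bank(int b) { cur_bank = b; bank_switches++; }

static int worker(int x) { return x * 2; }   /* code "living" in bank 3 */

/* The trampoline a linkage editor would generate for a cross-bank call. */
int far_worker(int x)
{
    int saved = cur_bank;
    set_bank(3);                  /* map the callee's bank in */
    int r = worker(x);
    set_bank(saved);              /* restore the caller's bank */
    return r;
}
```

A "local" call skips both set_bank() steps; that is the whole cost difference the text refers to.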

We were selling products with 128K of DRAM on Z80's back in 1981.
Because it was easier to design THAT hardware than to step up to
a 68K, for example.  (as well as leveraging our existing codebase)
The "video game era" was built on hybridized 8b systems -- even though
you could buy 32b hardware, at the time.  You would be surprised at
the ingenuity of many of those systems in offloading the processor
of costly (time consuming) operations to make the device appear more
powerful than it actually was.

I see most 32b designs operating without the benefits that a VMM system
can apply (even if you discount demand paging).  They just want to have
a big address space and not have to dick with "segment registers", etc.
They plow through the learning effort required to configure the device
to move the "extra capabilities" out of the way.  Then, just treat it
like a bigger 8/16 processor.

You can "bolt on" a simple network stack even with a rudimentary RTOS/MTOS.
Likewise, a web server.  Now, you remove the need for graphics and other UI
activities hosted *in* the device.  And, you likely don't need to support
multiple concurrent clients.  If you want to provide those capabilities, do
that *outside* the device (let it be someone else's problem).  And, you gain
"remote access" for free.

Few such devices *need* (or even WANT!) ARP caches, inetd, high performance
stack, file systems, etc.

Given the obvious (coming) push for enhanced security in devices, anything
running on your box that you don't need (or UNDERSTAND!) is likely going to
be pruned off as a way to reduce the attack surface.  "Why is this port open?
What is this process doing?  How robust is the XXX subsystem implementation
to hostile actors in an *unsupervised* setting?"

And has been targeted to a market that is EXTREMELY power sensitive

It is increasingly common for manufacturing technologies to be moving away
from "casual development".  The days of owning your own wave and doing
in-house manufacturing at a small startup are gone.  If you want to
limit yourself to the kinds of products that you CAN (easily) assemble, you
will find yourself operating with a much poorer selection of components
available.  I could fab a PCB in-house and build small runs of prototypes
using the wave and shake-and-bake facilities that we had on hand.  Harder
to do so, nowadays.

This has always been the case.  When thru-hole met SMT, folks had to
either retool to support SMT, or limit themselves to components that
were available in thru-hole packages.  As the trend has always been
for MORE devices to move to newer packaging technologies, anyone
who spent any time thinking about it could read the writing on the wall!
(I bought my Leister in 1988?  Now, I prefer begging favors from
colleagues to get my prototypes assembled!)

I suspect this is why we now see designs built on COTS "modules"
increasingly.  Just like designs using wall warts (so they don't
have to do the testing on their own, internally designed supplies).
It's one of the reasons FOSH is hampered (unlike FOSS, you can't roll
your own copy of a hardware design!)

This is specious reasoning:  "You could take the die out of a 68K and
replace it with a 64 bit ARM."  Would THAT core cost two or three times more
(do you recall how BIG 68K die were?) and consume 10 times the power?
(it would consume considerably LESS).

The market will drive the cost (power, size, $$$, etc.) of 64b cores
down as they will find increasing use in devices that are size and
power constrained.  There's far more incentive to make a cheap,
low power 64b ARM than there is to make a cheap, low power i686
(or 68K) -- you don't see x86 devices in phones (laptops have bigger
power budgets so less pressure on efficiency).

There's no incentive to making thru-hole versions of any "serious"
processor, today.  Just like you can't find any fabs for DTL devices.
Or 10 & 12" vinyl.  (yeah, you can buy vinyl, today -- at a premium.
And, I suspect you can find someone to package an ARM on a DIP
carrier.  But, each of those are niche markets, not where the
"money lies")

How do you come to that conclusion?  I have a 32b MCU on a board.
And some FLASH and DRAM.  How is that going to change when I
move to a 64b processor?  The 64b devices are also SoCs so
it's not like you suddenly have to add address decoding logic,
a clock generator, interrupt controller, etc.

Will phones suddenly become FATTER to accommodate the extra
hardware needed?  Will they all need bolt on battery boosters?

You're cherry picking.  They can also be FASTER for other tasks
and likely will be optimized to justify/exploit those added abilities;
a vendor isn't going to offer a product that is LESS desirable than
his existing products.  An IPv6 stack on a 64b processor is a bit
easier to implement than on 32b.
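
The IPv6 point is easy to illustrate (my sketch, assuming a plain struct-of-words representation): a 128-bit address is two native words on a 64-bit machine, four on a 32-bit one.

```c
/* Sketch of why wider registers help an IPv6 stack: comparing 128-bit
 * addresses is two word-compares on a 64-bit core; a 32-bit core needs
 * four, and masks/prefix operations scale the same way. */
#include <stdbool.h>
#include <stdint.h>

typedef struct { uint64_t w[2]; } ip6_addr;   /* 128 bits, two words */

static bool ip6_equal(const ip6_addr *a, const ip6_addr *b)
{
    return a->w[0] == b->w[0] && a->w[1] == b->w[1];
}
```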

(remember, ARM is in a LOT of fabs!  That speaks to how ubiquitous
it is!)

The desktop world inspires the embedded world.  You see what CAN be done
for "reasonable money".

In the 70's, we put i4004's into products because we knew the processing
that was required was "affordable" (at several kilobucks) -- because
we had our own '11 on site.  We leveraged the in-house '11 to compute
"initialization constants" for the needs of specific users (operating
the i4004-based products).  We didn't hesitate to migrate to i8080/85
when they became available -- because the price point was largely
unchanged (from where it had been with the i4004) AND we could skip the
involvement of the '11 in computing those initialization constants!

I watch the prices of the original 32b ARM I chose fall and see that
as an opportunity -- to UPGRADE the capabilities (and future-safeness
of the design).  If I'd assumed $X was a tolerable price, before,
then it likely still is!

I disagree.  I recall having to put lots of "peripherals" into
an 8/16b system, external address decoding logic, clock generators,
DRAM controllers, etc.

And, the cost of entry was considerably higher.  Development systems
used to cost tens of kilodollars (Intellec MDS, Zilog ZRDS, Moto
EXORmacs, etc.)  I shared a development system with several other
developers in the 70's -- because the idea of giving each of us our
own was anathema, at the time.

For 35+ years, you could put one on YOUR desk for a few kilobucks.
Now, it's considerably less than that.

You'd have to be blind to NOT think that the components that
are "embedded" in products haven't -- and won't continue -- to
see similar reductions in price and increases in performance.

Do you think the folks making the components didn't anticipate
the potential demand for smaller/faster/cheaper chips?

We've had TCP/IP for decades.  Why is it "suddenly" more ubiquitous
in product offerings?  People *see* what they can do with a technology
in one application domain (e.g., desktop) and extrapolate that to
other, similar application domains (embedded).

I did my first full custom 30+ years ago.  Now, I can buy an off-the-shelf
component and "program" it to get similar functionality (without
involving a service bureau).  Ideas that previously were "gee, if only..."
are now commonplace.

But that's because your needs evolve and the tools you choose to
use have, as well.

I wanted to build a little line frequency clock to see how well it
could discipline my NTPd.  I've got all these PCs, single board PCs,
etc. lying around.  It was *easier* to hack together a small 8b
processor to do the job -- less hardware to understand, no OS
to get in the way, really simple to put a number on the interrupt
latency that I could expect, no uncertainties about the hardware
that's on the PC, etc.

OTOH, I have a network stack that I wrote for the Z180 decades
ago.  Despite being written in a HLL, it is a bear to deploy and
maintain owing to the tools and resources available in that
platform.  My 32b stack was a piece of cake to write, by comparison!

Again, I disagree.  You've already admitted to using 32b processors
where 8b could suffice.  What makes you think you won't be using 64b
processors when 32b could suffice?

It's just as hard for me to prototype a 64b SoC as it is a 32b SoC.
The boards are essentially the same size.  "System" power consumption
is almost identical.  Cost is the sole differentiating factor, today.
History tells us it will be less so, tomorrow.  And, the innovations
that will likely come in that offering will likely exceed the
capabilities (or perceived market needs) of smaller processors.
To say nothing of the *imagined* uses that future developers will dream up.

I can make a camera that "reports to google/amazon" to do motion detection,
remote access, etc.  Or, for virtually the same (customer) dollars, I
can provide that functionality locally.  Would a customer want to add
an "unnecessary" dependency to a solution?  "Tired of being dependant
on Big Brother for your home security needs? ..."  Imagine a 64b SoC
with a cellular radio:  "I'll *call* you when someone comes to the door..."
(or SMS)

I have cameras INSIDE my garage that assist with my parking and
tell me if I've forgotten to close the garage door.  Should I have
google/amazon perform those value-added tasks for me?  Will they
tell me if I've left something in the car's path before I run over it?
Will they turn on the light to make it easier for me to see?
Should I, instead, tether all of those cameras to some "big box"
that does all of that signal processing?  What happens to those
resources when the garage is "empty"??

The "electric eye" (interrupter) that guards against closing the
garage door on a toddler/pet/item in its path does nothing to
protect me if I leave some portion of the vehicle in the path of
the door (but ABOVE the detection range of the interrupter).
Locating a *camera* on the side of the doorway lets me detect
if ANYTHING is in the path of the door, regardless of how high
above the old interrupter's position it may be located.

How *many* camera interfaces should the SoC *directly* support?

The number (and type) of applications that can be addressed with
ADDITIONAL *local* smarts/resources is almost boundless.  And, folks
don't have to wait for a cloud supplier (off-site processing) to
decide to offer them.

"Build it and they will come."

[Does your thermostat REALLY need all of that horsepower -- two
processors! -- AND google's server in order to control the HVAC
in your home?  My god, how did that simple bimetallic strip
ever do it??!]

If you move into the commercial/industrial domains, the opportunities
are even more diverse!  (e.g., build a camera that does component inspection
*in* the camera and interfaces to a go/nogo gate or labeller)

Note that none of these applications need a display, touch panel, etc.
What they likely need is low power, small size, connectivity, MIPS and
memory.  The same sorts of things that are common in phones.

On that, we agree.

Time for ice cream (easiest -- and most enjoyable -- way to lose weight)!

Re: 64-bit embedded computing is here and now
On 09/06/2021 02:30, Don Y wrote:

32 bit address space is not typically a problem or limitation.

(One other use of 64-bit address space is for debug tools like valgrind
or "sanitizers" that use large address spaces along with MMU protection
and specialised memory allocation to help catch memory errors.  But
these also need sophisticated MMU's and a lot of other resources not
often found on small embedded systems.)

Unless you are talking about embedded Linux and particularly demanding
(or inefficient!) tasks, halving your address space is not going to be a
problem.

I'm trying to establish if there is any value at all in moving to
64-bit.  And I have no doubt that for the /great/ majority of embedded
systems, it would not.

I don't even see it as having noticeable added value in the solid
majority of embedded Linux systems produced.  But in those systems, the
cost is minor or irrelevant once you have a big enough processor.

Process geometries are not targeted at 64-bit.  They are targeted at
smaller, faster and lower dynamic power.  In order to produce such a big
design as a 64-bit cpu, you'll aim for a minimum level of process
sophistication - but that same process can be used for twice as many
32-bit cores, or bigger sram, or graphics accelerators, or whatever else
suits the needs of the device.

A major reason you see 64-bit cores in big SOC's is that the die space
is primarily taken up by caches, graphics units, on-board ram,
networking, interfaces, and everything else.  Moving the cpu core from
32-bit to 64-bit only increases the die size by a few percent, and for
some tasks it will also increase the performance of the code by a
small but helpful amount.  So it is not uncommon, even if you don't need
the additional address space.

(The other major reason is that for some systems, you want to work with
more than about 2 GB ram, and then life is much easier with 64-bit cores.)

On microcontrollers - say, a random Cortex-M4 or M7 device - changing to
a 64-bit core will increase the die by maybe 30% and give roughly /zero/
performance increase.  You don't use 64-bit unless you really need it.

You need to write correct code regardless of the size of the device.  I
disagree entirely about memory protection being useful there.  This is
comp.arch.embedded, not comp.programs.windows (or whatever).  An MPU
might make it easier to catch and fix bugs while developing and testing,
but code that hits MPU traps should not leave your workbench.

But you are absolutely right about maths (floating point or integer) -
having 32-bit gives you a lot more freedom and less messing around with
scaling back and forth to make things fit and work efficiently in 8-bit
or 16-bit.  And if you have floating point hardware (and know how to use
it properly), that opens up new possibilities.

64-bit cores will extend that, but the step is almost negligible in
comparison.  It would be wrong to say "int32_t is enough for anyone",
but it is /almost/ true.  It is certainly true enough that it is not a
problem that using "int64_t" takes two instructions instead of one.
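
The scaling chore being described looks something like this in practice (my sketch: hand-rolled Q15 fixed point, as an 8/16-bit part forces on you, versus just using native integers):

```c
/* Sketch of the fixed-point juggling vs. native-width arithmetic.
 * Q15: value = raw / 32768; every multiply needs a widening op
 * and a shift, and the programmer tracks the scale by hand. */
#include <stdint.h>

static int16_t q15_mul(int16_t a, int16_t b)
{
    return (int16_t)(((int32_t)a * b) >> 15);
}

/* With 32-bit ints you can work in milli-units directly; the one
 * int64_t intermediate is "two instructions instead of one". */
static int32_t milli_mul(int32_t a, int32_t b)
{
    return (int32_t)((int64_t)a * b / 1000);
}
```

In Q15, 0.5 * 0.5 is q15_mul(16384, 16384) == 8192 (i.e. 0.25); in milli-units, 1.5 * 2.0 is milli_mul(1500, 2000) == 3000.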

A phone cpu takes orders of magnitude more power to do the kinds of
tasks that might be typical for a microcontroller cpu - reading sensors,
controlling outputs, and the like.  Phone cpus
are optimised for doing the "big phone stuff" efficiently - because
that's what takes the time, and therefore the power.

(I'm snipping because there is far too much here - I have read your
comments, but I'm trying to limit the ones I reply to.)

I assume you are disagreeing about seeing 64-bit cpus only on devices
that need a lot of memory or processing power, rather than disagreeing
that such devices are only a tiny proportion of embedded devices.

As I have said, I think there will be an increase in the proportion of
64-bit embedded devices - but I think it will be very slow and gradual.
 Perhaps in 20 years time 64-bit will be in the place that 32-bit is
now.  But it won't happen for a long time.

Why do I use 32-bit microcontrollers where an 8-bit one could do the
job?  Well, we mentioned above that you can be freer with the maths.
You can, in general, be freer in the code - and you can use better tools
and languages.  With ARM microcontrollers I can use the latest gcc and
C++ standards - I don't have to program in a weird almost-C dialect
using extensions to get data in flash, or pay thousands for a limited
C++ compiler with last century's standards.  I don't have to try and
squeeze things into 8-bit scaled integers, or limit my use of pointers
due to cpu limitations.

And manufacturers make the devices smaller, cheaper, lower power and
faster than 8-bit devices in many cases.

If manufacturers made 64-bit devices that were smaller, cheaper and lower
power than today's 32-bit ones, I'd use them.  But they would not be
better for the job, or better to work with and develop on, in the way
32-bit devices are better than 8-bit and 16-bit ones.

Quoted text here. Click to load it

For you, perhaps.  Not necessarily for others.

We design, program and manufacture electronics.  Production and testing
of simpler cards is cheaper.  The pcbs are cheaper.  The chips are
cheaper.  The mounting is faster.  The programming and testing is
faster.  You don't mix big, thick tracks and high power on the same
board as tight-packed BGA with blind/buried vias - but you /can/ happily
work with less dense packages on the same board.

If you are talking about replacing one 400-ball SOC with another
400-ball SOC with a 64-bit core instead of a 32-bit core, then it will
make no difference in manufacturing.  But if you are talking about
replacing a Cortex-M4 microcontroller with a Cortex-A53 SOC, it /will/
be a lot more expensive in most volumes.

I can't really tell what kinds of designs you are discussing here.  When
I talk about embedded systems in general, I mean microcontrollers
running specific programs - not general-purpose computers in embedded
formats (such as phones).

(For very small volumes, the actual physical production costs are a
small proportion of the price, and for very large volumes you have
dedicated machines for the particular board.)

Quoted text here. Click to load it

Quoted text here. Click to load it

I've not heard of that as a dieting method, but I shall give it a try :-)

Re: 64-bit embedded computing is here and now
On 6/9/2021 12:17 AM, David Brown wrote:

Quoted text here. Click to load it

That's a no-brainer -- most embedded systems are small MCUs.
Consider the PC I'm sitting at has an MCU in the keyboard;
another in the mouse; one in the optical disk drive; one in
the rust disk drive; one in the printer; two in the UPS;
one in the wireless "modem"; one in the router; one in
the thumb drive; etc.  All offsetting the "big" CPU in
the computer, itself.

Quoted text here. Click to load it

My point is that the market can distort the "price/value"
relationship in ways that might not, otherwise, make sense.
A "better" device may end up costing less than a "worse"
device -- simply because of the volumes that the population
of customers favor.

Quoted text here. Click to load it

They will apply newer process geometries to newer devices.
No one is going to retool an existing design -- unless doing
so will result in a significant market enhancement.

Why don't we have 100MHz MC6800's?

Quoted text here. Click to load it

Again, "... unless the market has made those devices cheaper than
their previous choices"  People don't necessarily "fit" their
applications to the devices they choose; they consider other
factors (cost, package type, availability, etc.) in deciding
what to actual design into the product.

You might "need" X MB of RAM but will "tolerate" 4X -- if the
price is better than for the X MB *or* the X MB devices are
not available.  If the PCB layout can directly accommodate
such a solution, then great!  But, even if not, a PCB
revision is a cheap expenditure if it lets you take advantage of
a different component.

I've made very deliberate efforts NOT to use many of the
"I/Os" on the MCUs that I'm designing around so I can
have more leeway in making that selection when released
to production (every capability used represents a
constraint that OTHER selections must satisfy).

Quoted text here. Click to load it

You're assuming you (or I) have control over all of the code that
executes on a product/platform.  And, that every potential bug
manifests *in* testing.  (If that were the case, we'd never
see bugs in the wild!)

In my case, "third parties" (who the hell is the SECOND party??)
can install code that I've no control over.  That code could
be buggy -- or malevolent.  Being able to isolate "actors"
from each other means the OS can detect "can't happens"
at run time and shut down the offender -- instead of letting
it corrupt some part of the system.
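As an illustration only (every name here is hypothetical, not from any
particular OS): a kernel-side check can treat an ungranted handle as a
"can't happen" and retire the offending task at run time, instead of
letting it corrupt shared state.

```c
#include <stdbool.h>

#define MAX_HANDLES 16

/* Hypothetical per-task state: which handles this task was granted. */
typedef struct {
    bool granted[MAX_HANDLES];
    bool alive;
} task_t;

/* If a task presents a handle it was never granted, that's a
   "can't happen": shut the offender down rather than proceed. */
bool validate_handle(task_t *t, int h)
{
    if (h < 0 || h >= MAX_HANDLES || !t->granted[h]) {
        t->alive = false;   /* detected at run time; task is retired */
        return false;
    }
    return true;
}
```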

Quoted text here. Click to load it

Except that int64_t can take *four* instead of one (add/sub/mul two
int64_t's with 32b hardware).

Quoted text here. Click to load it

Quoted text here. Click to load it

But you're making assumptions about what the "embedded microcontroller"
will actually be called upon to do!

Most of my embedded devices have "done more" than the PCs on which
they were designed -- despite the fact that the PC can defrost bagels!

Quoted text here. Click to load it

I'm disagreeing with the assumption that 64bit CPUs are solely used
on "tablets, devices with good GUIs, touchscreens, networking"
(in the embedded domain).

Quoted text here. Click to load it

And how is that any different from 32b processors introduced in 1980
only NOW seeing any sort of "widespread" use?

The adoption of new technologies accelerates, over time.  People
(not "everyone") are more willing to try new things -- esp if
it is relatively easy to do so.  I can buy a 64b evaluation kit
for a few hundred dollars -- I paid more than that for my first
8" floppy drive.  I can run/install some demo software and
get a feel for the level of performance, how much power
is consumed, etc.  I don't need to convince my employer to
make that investment (so *I* can explore).

In a group environment, if such a solution is *suggested*,
I can then lend my support -- instead of shying away out of
fear of the unknown risks.

Quoted text here. Click to load it

Exactly.  It's "easier" and you're less concerned with sorting
out (later) what might not fit or be fast enough, etc.

I could have done my current project with a bunch of PICs
talking to a "big machine" over EIA485 links (I'd done an
industrial automation project like that, before).  But,
unless you can predict how many sensors/actuators ("motes")
there will EVER be, it's hard to determine how "big" that
computer needs to be!

Given that the cost of the PIC is only partially reflective
of the cost of the DEPLOYED mote (run cable, attach and
calibrate sensors/actuators, etc.) the added cost of
moving to a bigger device on that mote disappears.
Especially when you consider the flexibility it affords
(in terms of scaling).

Quoted text here. Click to load it

Again, you're making predictions about what those devices will be.

Imagine 64b devices ARE equipped with radios. You can ADD a radio
to your "better suited" 32b design.  Or, *buy* the radio already
integrated into the 64b solution.  Are you going to stick with
32b devices because they are "better suited" to the application?
Or, will you "suffer" the pains of embracing the 64b device?

It's not *just* a CPU core that you're dealing with.  Just like
the 8/16 vs 32b decision isn't JUST about the width of the registers
in the device or size of the address space.

I mentioned my little experimental LFC device to discipline my
NTPd.  It would have been *nice* if it had an 8P8C onboard
so I could talk to it "over the wire".  But, that's not the
appropriate sort of connectivity for an 8b device -- a serial
port is.  If I didn't have a means of connecting to it thusly,
the 8b solution -- despite being a TINY development effort -- would
have been impractical; bolting on a network stack and NIC would
greatly magnify the cost (development time) of that platform.

Quoted text here. Click to load it

I cite phones as an example of a "big market" that will severely
impact the devices (MCUs) that are actually manufactured and sold.

I increasingly see "applications" growing in complexity -- beyond
"single use" devices in the past.  Devices talk to more things
(devices) than they had, previously.  Interfaces grow in
complexity (markets often want to exercise some sort of control
or configuration over a device -- remotely -- instead of just
letting it do its ONE thing).

In the past, additional functionality was an infrequent upgrade.
Now, designs accommodate it "in the field" -- because they
are expected to (no one wants to mail a device back to the factory
for a software upgrade -- or have a visit from a service tech
for that purpose).

Rarely does a product become LESS complex, with updates.  I've
often found myself updating a design only to discover I've
run out of some resource ("ROM", RAM, real-time, etc.).  This
never causes the update to be aborted; rather, it forces
an unexpected diversion into shoehorning the "new REQUIREMENTS"
into the old "5 pound sack".

In *my* case, there are fixed applications (MANY) running on
the hardware.  But, the system is designed to allow for
new applications to be added, old ones replaced (or retired),
augmented with additional hardware, etc.  It's not the "closed
unless updated" systems previously common.

We made LORAN-C position plotters, ages ago.  Conceptually,
cut a portion of a commercially available map and adhere it
to the plotter bed.  Position the pen at your current location
on the map.  Turn on.  Start driving ("sailing").  The pen
will move to indicate your NEW current position as well as
a track indicating your path TO that (from wherever you
were a moment ago).

[This uses 100% of an 8b processor's real-time to keep up
with the updates from the navigation receiver.]

"Gee, what if the user doesn't have a commercial map,
handy?  Can't we *draw* one for him?"

[Hmmm... if we concentrate on JUST drawing a map, then
we can spend 100% of the CPU on THAT activity!  We'll just
need to find some extra space to store the code required
and RAM to hold the variables we'll need...]

"Gee, when the fisherman drops a lobster pot over the
side, he has to run over to the plotter to mark the
current location -- so he can return to it at some later
date.  Why can't we give him a button (on a long cable)
that automatically draws an 'X' on the plot each time
he depresses it?"

You can see where this is going...

Devices grow in features and complexity.  If that plotter
was designed today, it would likely have a graphic display
(instead of pen and ink).  And the 'X' would want to be
displayed in RED (or, some user-configured color).  And
another color for the map to distinguish it from the "track".
And updates would want to be distributed via a phone
or thumbdrive or other "user accessible" medium.

This because the needs of such a device will undoubtedly
evolve.  How often have you updated the firmware in
your disk drives?  Optical drives?  Mice?  Keyboard?
Microwave oven?  TV?

We designed medical instruments where the firmware resided
in a big, bulky "module" that could easily be removed
(expensive ZIF connector!) -- so that medtechs could
perform the updates in minutes (instead of taking the device
out of service).  But, as long as we didn't overly tax the
real-time demands of the "base hardware", we were free
(subject to pricing issues) to enhance that "module" to
accommodate whatever new features were required.  The product
could "remain current".

Like adding RAM to a PC to extend its utility (why can't I add
RAM to my SmartTVs?  Why can't I update their codecs?)

The upgradeable products are designed for longer service lives
than the non-upgradeable examples, here.  So, they have to be
able to accommodate (in their "base designs") a wider variety
of unforeseeable changes.

If you expect a short service life, then you can rationalize NOT
upgrading/updating and simply expecting the user to REPLACE the
device at some interval that your marketeers consider appropriate.

Quoted text here. Click to load it

It's not recommended.  I suspect it is evidence of some sort of
food allergy that causes my body not to process calories properly
(a tablespoon is 200+ calories; an enviable "scoop" is well over a
thousand!).  It annoys my other half to no end cuz she gains weight
just by LOOKING at the stuff!  :>  So, its best for me to "sneak"
it when she can't set eyes on it.  Or, for me to make flavors
that she's not keen on (this was butter pecan so she is REALLY

Re: 64-bit embedded computing is here and now

Quoted text here. Click to load it

A number of years ago somebody had a 200MHz 6502.  Granted, it was a
soft core implemented in an ASIC.

No idea what it was used for.

Quoted text here. Click to load it

A 32b CPU could require a dozen instructions to do 64b math, depending
on whether it has condition flags, whether math ops set the condition
flags (vs. requiring an explicit compare or compare-and-branch), and
whether it even has carry-aware ops (some chips don't).

If detecting wrap-around/overflow requires comparing the result
against the operands, multi-word arithmetic (even just 2 words)
quickly becomes long and messy.
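A minimal sketch of that comparison trick in portable C (the struct and
names are mine, not from any particular compiler): with no carry flag
visible from C, the carry out of the low word is recovered by comparing
the unsigned sum against one of its operands.

```c
#include <stdint.h>

/* A 64-bit value held as two 32-bit words, as on a 32-bit-only core. */
typedef struct { uint32_t lo, hi; } u64_words;

u64_words add64(u64_words a, u64_words b)
{
    u64_words r;
    r.lo = a.lo + b.lo;               /* may wrap around (unsigned) */
    uint32_t carry = (r.lo < a.lo);   /* wrapped => carry out of low word */
    r.hi = a.hi + b.hi + carry;
    return r;
}
```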


Re: 64-bit embedded computing is here and now
Hi George,

On 6/12/2021 9:58 AM, George Neuner wrote:
Quoted text here. Click to load it

AFAICT, the military still uses them.  I know there was a radhard
8080 (or 8085?) made some years back.

I suspect it would just be a curiosity piece, though.  You'd need
< 10ns memory to use it in its original implementation.  Easier
to write an emulator and run it on a faster COTS machine!

Quoted text here. Click to load it

If you look back to life with 8b registers, you understand the
pain of even 32b operations.

Wider architectures make data manipulation easier.  Bigger
*address* spaces (wider address buses) make it easier to
"do more".

So, an 8b CPU with extended address space (bank switching, etc.)
can tackle a bigger (more varied) problem (at a slow rate).
But a wider CPU with a much smaller address space can handle
a smaller (in scope) problem at a much faster rate (all else
being equal -- memory speed, etc.)
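A host-side sketch of the bank-switching idea (the sizes and latch are
illustrative, not any specific chip): a 16-bit address bus sees only one
window at a time, and extra latch bits choose which slice of the larger
store that window maps onto.

```c
#include <stdint.h>

#define BANK_COUNT 8
#define BANK_SIZE  0x4000u     /* a 16 KB window */

/* 128 KB of total storage hidden behind a 16 KB window. */
static uint8_t storage[BANK_COUNT][BANK_SIZE];
static uint8_t bank_latch;     /* an output port on real hardware */

void select_bank(uint8_t b)
{
    bank_latch = b % BANK_COUNT;   /* latch the high "bank" bits */
}

uint8_t read_windowed(uint16_t offset)
{
    return storage[bank_latch][offset % BANK_SIZE];
}

void write_windowed(uint16_t offset, uint8_t v)
{
    storage[bank_latch][offset % BANK_SIZE] = v;
}
```

The cost of the bigger effective address space is the extra latch write
on every cross-bank access -- the "slow rate" referred to above.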

When doing video games, this was a common discussion (price
sensitive); do you move to a wider processor to gain performance?
or, do you move to a faster one? (where you put the money changes)

Re: 64-bit embedded computing is here and now
Quoted text here. Click to load it

Philip Munts made a comment a while back that stayed with me: that these
days, in anything mains powered, there is usually little reason to use
an MCU instead of a Linux board.

Re: 64-bit embedded computing is here and now
On 6/9/2021 9:41 AM, Paul Rubin wrote:
Quoted text here. Click to load it

I note that anytime you use a COTS "module" of any kind, you're still
stuck having to design and layout some sort of "add-on" card that
handles your specific I/O needs; few real world devices can be
controlled with just serial ports, NICs and "storage interfaces".

And, you're now dependent on a board supplier as well as having
to understand what's on (and in) that board as they are now
critical components of YOUR product.  The same applies to any firmware
or software that it runs.

I'm sure the FAA, FDA, etc. will gladly allow you to formally
validate some other party's software and assume responsibility
for its proper operation!

Re: 64-bit embedded computing is here and now
Quoted text here. Click to load it

I have a friend who has a ceiling fan with a raspberry pi in it, because
that was the easiest solution to turning it on and off remotely...

So yeah, I agree, "with a computer" is becoming a default answer.

On the other hand, my furnace (now geothermal) has been controlled by a
linux board since 2005 or so... maybe I'm not the typical user ;-)

Re: 64-bit embedded computing is here and now
Paul Rubin wrote:
Quoted text here. Click to load it

Except that if it has a network connection, you have to patch it  
unendingly or suffer the common-as-dirt IoT security nightmares.


Phil Hobbs

Dr Philip C D Hobbs
Principal Consultant
Re: 64-bit embedded computing is here and now
On 6/9/2021 20:44, Phil Hobbs wrote:
Quoted text here. Click to load it

Quoted text here. Click to load it

Those nightmares do not apply if you are in complete control of your
firmware - which few people are nowadays indeed.

I have had netMCA devices on the net for over 10 years now in many
countries; the worst problem I have seen was some Chinese IP hanging
on port 80, to no consequence.


Dimiter Popoff, TGI             http://www.tgi-sci.com

Re: 64-bit embedded computing is here and now
Dimiter_Popoff wrote:
Quoted text here. Click to load it

But if you're using a RasPi or Beaglebone or something like that, you  
need a reasonably well-upholstered Linux distro, which has to be patched  
regularly.  At very least it'll need a kernel, and kernel patches  
affecting security are not exactly rare.


Phil Hobbs

Dr Philip C D Hobbs
Principal Consultant
Re: 64-bit embedded computing is here and now
Quoted text here. Click to load it

You're in the same situation with almost anything else connected to the
internet.  Think of the notorious "smart light bulbs".

On the other hand, you are in reasonable shape if the raspberry pi
running your fish tank is only reachable through a LAN or VPN.
Non-networked low end linux boards are also a thing.

Re: 64-bit embedded computing is here and now
On 6/9/2021 12:58 PM, Paul Rubin wrote:
Quoted text here. Click to load it

No, that's only if you didn't adequately prepare for such "exposure".

How many Linux/Windows boxes are running un-NEEDED services?  Have
ports open that shouldn't be?  How much emphasis was spent on eking
out a few percent extra performance from the network stack that
could have, instead, been spent on making it more robust?

How many folks RUNNING something like Linux/Windows in their product
actually know much of anything about what's under the hood?  Do they
even know how to BUILD a kernel, let alone sort out what it's
doing (wrong)?

Exposed to the 'net you always are at the mercy of DoS attacks
consuming your inbound bandwidth (assuming you have no control
of upstream traffic/routing).  But, even a saturated network
connection doesn't have to crash your device.

OTOH, if your box is dutifully trying to respond to incoming packets
that may be malicious, then you'd better hope that response is
"correct" (or at least SAFE) in EVERY case.

For any of these mainstream OS's, an adversary can play with an
exact copy of yours 24/7/365 to determine its vulnerabilities
before ever approaching your device.  And, even dig through
the sources (of some) to see how a potential attack could unfold.
Your device will likely advertise exactly what version of the
kernel (and network stack) it is running.

[An adversary can also BUY one of YOUR devices and do the same
off-line analysis -- but the analysis will only apply to YOUR
device (if you have a proprietary OS/stack) and not a
multitude of other exposed devices]

Quoted text here. Click to load it

Exactly.  But that limits utility/accessibility.

If you only need moderate/occasional access, you can implement
a "stealth mode" that lets the server hide, "unprotected".
Or, require all accesses to be initiated from that server
(*to* the remote client) -- similar to a call-back modem.

And, of course, you can place constraints on what can be done
over that connection instead of just treating it as "God Mode".
[No, you can't set the heat to 105 degrees in the summer time;
I don't care if you happen to have appropriate credentials!
And, no, you can't install an update without my verifying
you and the update through other mechanisms...]
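A tiny illustration of that "constrained, not God Mode" idea (the limit
and names are invented for the sketch): the policy check runs after
authentication, so valid credentials alone cannot command an unsafe value.

```c
#include <stdbool.h>

#define MAX_SUMMER_SETPOINT_F 85   /* illustrative policy limit */

/* Even an authenticated remote user is bounded by local policy. */
bool accept_setpoint(bool authenticated, bool is_summer, int degrees_f)
{
    if (!authenticated)
        return false;                           /* no credentials, no access */
    if (is_summer && degrees_f > MAX_SUMMER_SETPOINT_F)
        return false;                           /* policy trumps credentials */
    return true;
}
```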

Re: 64-bit embedded computing is here and now
On 6/9/2021 22:22, Phil Hobbs wrote:
Quoted text here. Click to load it

Quoted text here. Click to load it

Oh, if you use one of these, all you can rely on is prayer; I don't
think there is even *one* person who knows everything that goes on
within such a system.  Basically it is impossible to know: even if you
have all the manpower to dissect all the code, you can still be taken
by surprise by something a compiler has inserted somewhere, etc.  Your
initial point is well taken here.
If you ask *me* if I am 100% sure what my devices might do - and I
have written every single bit of code running on them, which has
been compiled by a compiler I have written every single bit of - I
might still be scratching my head. We buy our silicon, you know...


Dimiter Popoff, TGI             http://www.tgi-sci.com
