Embedded Linux processors

I was idly looking to see what was out there in the low-end Linux space - something bigger than an ESP32 but more production-friendly than a Raspberry Pi. I came across this excellent guide:

formatting link
He builds dev boards for 10 different chips from 7 vendors, just to see how it all goes - both hardware and software. The results are quite interesting.

Any other recommendations for Linux-supporting SoCs that are nice for low volume/hand production?

Theo

Reply to
Theo

The key things to determine are what you consider "production friendly", and what you need. You want a module, not a chip. Modules vary widely:

- Connections: some come with pins, others with just solder pads, and some are made to fit SO-DIMM sockets or similar connectors.
- Interfaces: some have Ethernet, Wifi, Bluetooth, HDMI, USB, and other high-speed interfaces; others have much less.
- Storage: some have on-board eMMC or other NAND flash; others rely on external memory or SD cards.
- Power: some have their power supply handling on board and need just a single supply at 3.3 V or 5 V; others need multiple supplies at different levels with careful bring-up.
- Longevity: some have long lifetimes and will be available for a decade; others are from companies that might be gone next month.
- Support: some have excellent support from the supplier, some have excellent community support, and others have virtually no support at all.

We don't know anything about the product, its needs, or about what you can do yourself and what you need supplied. All I can give you is general advice here regarding things to consider. And be wary of trying to get minimal cost for the module - it can easily cost you more in the long run. (Equally, high price of a module is no guarantee.)

There are many people making SoCs that can work well with Linux, mostly ARM Cortex-A but also some RISC-V now. (There are also PPC, MIPS, and a few other cores, but those are in more specialised devices like network chips.) There are no SoCs that are remotely suitable for hand production.

Another thing to consider, of course, is whether a Linux module is what you really want. There are microcontrollers that are more powerful than ESP32 devices, such as NXP's i.MX RT line (with 500-1000 MHz Cortex-M7 cores). On the software side, there is Zephyr, which sits somewhere between FreeRTOS and Linux and might be useful. (I haven't tried Zephyr myself.)

Reply to
David Brown

I wouldn't assume that (though the OP will have to clarify). Pi's are fine for prototyping, but there are many reasons why they might not be a suitable choice for real products. However, that does not at all suggest that it is a good idea to use chips directly rather than modules.

Unless your production runs are at least 10,000 a time, it is unlikely to be cost-effective to use anything other than pre-populated modules. Designing a board for large ball count BGAs, high speed memories, etc., is not quick or cheap, nor is their production.

That could have been good advice - twenty years ago.

Now it is pointless to aim for such a minimal system. The cheapest processors with MMU supported by Linux cost a few dollars. The cheapest non-MMU microcontrollers that are capable of supporting Linux are at least ten dollars. Swap has always been optional, but working without an MMU leads to a lot of complications and restrictions (such as no "fork" calls). No one uses non-MMU Linux except for nerdy fun. (And fun is /always/ a good reason for doing something.)

Reply to
David Brown

The above article covers all of those things in a nice way: some parts are in 64-pin QFNs, some are in 0.8 mm BGAs, which he reckons are doable to hand-solder (I haven't tried that...). Some have abandonware software stacks, others are in the mainline Linux tree, and so on.

I don't have a product :-) Really I'm just running a thought experiment about what would happen if I did have a product - let's say an IoT thingy (wifi, display, etc.) at a <$100 sticker price, with initial volumes in the hundreds.

The ESP32s are nice as they're simple, cheap wifi modules. If you wanted to cut costs you could use the bare chip. The Pis aren't: the Zero is a nice form factor, but you can't buy it in volume. The regular Pis can't really be mounted on a custom PCB unless you have a large enclosure. The Compute Modules are better, but still larger than an ESP32. However you can't really buy any of them at the moment, and if you could they would be quite expensive. The RP2040 is an OK microcontroller but nothing special (and wifi needs an extra chip). Also none of them have any protection against someone changing or stealing your firmware.

It is interesting in the above article how much the complexity starts to rise once you start going beyond a single chip solution: BGAs, DDR routing, numerous power supplies and sequencing, etc.

Some of the SiPs and BGAs in the article above are, allegedly. However 'hand production' is really a proxy for production complexity. If you can build a 4-layer board and hand-mount it, you can build in low-ish volume on a relatively cheap pick-and-place line. If you need a 10-layer board and package-on-package BGA mounting equipment, you can't do that without a much greater volume to amortise the tooling costs.

Systems on module are a good solution to that but, if some of these SoCs are niche, the modules are even more niche (hard to buy in small quantities, produced by a tiny company, and so on).

The i.MX RT isn't one I've come across, thanks. That's the kind of thing I'm interested in.

The software side is one that's frequently neglected: one thing the Raspberry Pi folks are really good at is maintaining their software stack. A lot of other (Chinese) Linux SoC vendors basically throw it all over the wall and let the customers do the maintenance. In some ways it's nice not to play in that space. OTOH once you get beyond a certain point it's nice to be able to use 'grown up' tools (like a webserver that can easily do TLS, not some stripped down microcontroller TLS stack that only does TLS 1.1 and can't fit any more in RAM, or worse does no TLS at all).

I'm really mainly curious how this middle part of the market goes, and wondering how others feel about it.

Theo

Reply to
Theo

The reason people use Linux is for the software stacks. It allows you to write in a more friendly language, have better libraries for doing complicated things, use existing tooling, and not have to worry about boring housekeeping things like networking (does your thing support IPv6? Linux has for decades; does your homebrew embedded RTOS? What about WPA3?). Can you interact securely with whatever cloud service your widget needs to do its thing? (Especially if that service is not designed specifically for talking to low-end widgets.)

Essentially you trade off ease of software development for hardware complexity. If you're playing in the low volume game, development effort and time to market is more important than saving cents on production costs. If you're selling by the million the tradeoff is different.

If you want to run <tool> and that needs a filesystem, yes you do. I'm sure you could reimplement it to do without, but that takes effort.

That depends on the app. The point here is to be able to use existing software without having to re-engineer it. Once you start re-engineering things, that's where your time goes.

Indeed, which is why microcontrollers have various secure boot and encrypted firmware support.

(which aren't perfect, but prevent somebody just pulling your flash chip and reading it out)

Indeed, no black magic, just time and cost. Don't do it if you don't need it.

The thing here is choosing your battles. Spend your time on the things that add value to the product. Don't make life needlessly harder when that's not necessary. Everything *can* be done, but some things shouldn't *need* to be done. If you're in the high-volume game, saving $1m using cheaper parts makes sense. If you're in the low-volume game, you might only save $1000 but spend $10K in time doing so.

Theo

Reply to
Theo

If you are doing all this for fun and learning, where your own time is free and reliability is not an issue, then you can do some of this by hand. If you are trying to make a product to sell to others and turn a profit, it's a completely different situation.

BGAs are okay to place by hand, but getting a good, even soldering result with kitchen-top tools is unlikely. At best, you'll get something that works for a while - but put it to real use and the voids, half-contacts, partial short-circuits and other flaws will cause failures sooner or later as thermal stresses wear them out. And then you have the 0.5 mm pitch QFN packages, the 0402 chicken-feed components, and all the rest of it.

And if this is professional, don't forget the testing and certification you need, depending on where you are selling it - things like EMC testing and radio emission regulations. If your home-made device has Wifi or Bluetooth, and you want to sell it, the certification process will cost you hundreds of thousands of dollars (especially since you haven't a hope in hell of passing the tests when you do home production).

But it can certainly be fun as a hobby and to get a better understanding about how all this works.

Yes - they are often a first choice for when you want Wifi and/or Bluetooth.

No, you can't. You can't design a working Wifi module for the price of 100 ESP32 modules, assuming you value the hours spent appropriately for an electronics engineer. You can't produce a working Wifi module in your kitchen or garage, because the required components are too small to handle by hand. And that's before you try and certify the thing so that it is legal to sell.

At my company, we have experienced electronics designers with top-class design software. We have production facilities for high-speed automated production of low and mid volume production runs, capable of placing and soldering parts that are barely visible to the naked eye, with optical and x-ray inspection systems. We would not consider making a product with Wifi using bare chips - we would use ready-made modules. If we can't do it, /you/ can't do it.

Of course you can order them in volume. More appropriate, perhaps, are the Pi compute modules - which you can also order in volume. You are asking for hundreds, while distributors will happily take orders in tens of thousands for these.

However, like almost everything else in the electronics industry these days, you'll be hard pushed to find much stock of Pis, or any other Linux module, Linux-capable SoC, or the other components involved. So if you need something in the short term, take whatever you can find in stock.

Hopefully the current component shortage situation won't last forever, and then you'll be able to order Pi Zeros and Pi Compute Modules in whatever quantity suits.

Linux systems are /never/ a single chip solution. And yes, it can often be the other chips that are the biggest challenges - or their supporting small components.

That is partly correct, partly misunderstanding.

The board layer count affects the cost of the PCB itself, and the effort (and tools) required for the design. It doesn't affect your own production (you don't make the PCB yourself), although it can limit the suppliers that can make it for you.

If you want to make professional quality boards and sell them, then you do not do it with hand mounting - even if some guy on the internet says it's possible. If you don't have the volumes involved to have the production tools needed for automated pick and place, optical inspection, proper solder ovens, etc., outsource the board production. There is no shortage of companies who will do this even for runs of hundreds of boards - you can choose between more local suppliers that will have well-trained staff that will work with you to improve the design, all the way to anonymous far-eastern companies that will work cheaply and give you exactly what you ask for, mistakes and all.

There are some kinds of boards that are fine for small scale manufacturing with simple machines - Linux boards are not one of them. A base board for mounting a Linux module might be a lot more practical for your own production.

The niche SoCs are not normally on modules. The people who buy a SoC with MIPS or PPC cores do so because they are making massive network switches, car engine controllers, and the like.

The more "fun" parts in the family are fair-sized BGAs. They are a nice group of parts.

IMHO the "encrypt everything" movement is a silly idea and a massive waste of effort and resources. Sure, you want your bank website traffic to use SSL, but it is completely unnecessary for the great majority of web traffic.

But I agree that sometimes it is nice to have plenty of resources in your embedded system, whatever you use them for.

Reply to
David Brown

I didn't, no - I was responding to what /you/ wrote in reply to what the OP asked. That was the relevant issue. (I've now read the article, and it has not changed my opinions significantly.)

I do have experience at it, yes. And it takes knowledge, tools, and time. I didn't say the OP could not do it - I don't know his abilities. I said it was not cost-effective.

Is that a trick question? You don't use Linux.

Yes, or for which it is practical to make a build that could be used in a real system (as distinct from just for fun and bragging rights, such as the guy who got Linux "running" on an AVR).

I don't understand what you are trying to say here. Are we to guess what /looks/ like Linux, but /isn't/ Linux? You think people who want embedded Linux would be happy with a BSD? (Some might, but certainly not all.) Or a Windows system with WSL? Or FreeRTOS and LWIP with POSIX-style socket APIs?

Fork /always/ has to create a /logical/ copy of the parent process - that's what it does. Without an MMU, all /writeable/ memory areas need to be duplicated at the fork by full copy, whereas with an MMU the pages are marked "copy on write" and only actually duplicated when needed. ("fork" existed before MMU processors were used for *nix.)

In MMU-less Linux, "fork" is simply not supported as it would be too inefficient and complicated. You need to use vfork() then execve(), or posix_spawn(), or clone(), with certain restrictions.

It is one of the biggest headaches when porting real Linux software to MMU-less Linux. It has become less of an issue for some software, because it has become more common to write programs that can run on Windows as well as Linux, and Windows does not support "fork()" either.

You are talking about an MPU (memory protection unit), not an MMU (memory management unit). MPUs are common on 32-bit microcontrollers, and let you restrict access to different parts of memory.

MMUs are used to change the mapping between the logical addresses used by code and the physical addresses used by the hardware. They provide many functions in addition to supporting "fork()", such as giving applications a contiguous view of memory despite fragmentation in the physical memory, and letting shared libraries have different physical and logical addresses.

An MMU makes life massively simpler, more flexible and more efficient in a "big" OS where different programs are loaded and run at different times.

Yes - people did use it before, and now they don't. The day it becomes inconvenient to continue the support for it in the kernel will be the day it gets dropped.

Reply to
David Brown

The "encrypt everything" movement is not just silly, it is *s**te*. And it is not just about the web, it goes for mail etc. too. It is OK to have the encryption _capability_, but doing it all over the place is just a way to push the sales of more silicon. They used to do this by just bloating software so PCs would become "old" within <5 years; now that they have tens of *gigabytes* of RAM they need a way to justify selling even more. Overall it may not be a bad thing - it has kept the industry advancing - but to those who can see how things work it looks not just silly, it looks... (OK, here comes the Irish/Scottish word again).

Reply to
Dimiter_Popoff

That's a reasonable argument, on the surface. But like many such simplistic rules, it discourages thinking, knowledge, nuances and appropriate usage. It is much like "zero tolerance" rules - they mean "zero thought" and often throw out the baby with the bath water.

Different types of communication or storage have different requirements, and the benefits and costs of encryption are correspondingly varied. There are /many/ costs to using encryption - not just processor cycles or code and ram space. There's complexity in the code and the scope for bugs, the near impossibility of debugging or monitoring traffic or recovering data in encrypted storage, and the need to handle ever-changing standards and expiring keys and certificates.

And while it might appear that "encrypt everything" means that even those that don't really understand the issues will still make "safe" systems because they use encryption by default, it is simply not true. Those who don't understand the appropriate security needs for a particular use-case are unlikely to use /appropriate/ encryption, and can easily get it wrong (such as poor handling of the keys). And now instead of saying "I don't understand this, I'll ask someone who does", they will think "it's all encrypted and therefore secure". They'll think their website is safe because it uses TLS, without considering that the bad guys can connect on the same encrypted links and hack in with the same weak passwords - only now as their traffic is encrypted, it's harder to track them.

Reply to
David Brown

Here are a few SOMs I've looked at, trying to avoid SoC difficulties:

formatting link
Firms I've worked with have been happy with Toradex (for new designs use the Verdin family):
formatting link
end:
formatting link
I guess I'm a wimp, but I really don't want to deal with DDR routing and EMC issues for small runs...

Reply to
Dave Nadler
