One SBC or two?

I'd like to hear what you folks think about the following question. First, some setup. We're looking at a major upgrade of an existing product. The existing product is over a decade old and uses a 500MHz x86 SBC to do a lot of realtime (motor control and other things) and to run a primitive UI. The new product would do roughly the same realtime stuff, but would add a modern GUI, connection to useful peripherals such as printers, and some level of internet connectivity. This may well put us into Linux or some version of Windows. Due to the excruciating entanglement of the current UI and realtime, it would certainly require a major rewrite of the realtime in any case. I could tell you such stories...

This is a high-price, low volume product, so cost of the computing HW is insignificant compared to development cost and time.

Now for the question. What do you see as advantages and disadvantages to separating the functionality onto two SBCs, one for realtime and one (not necessarily the same) for GUI, etc.? Both approaches (single SBC or dual) have come up for discussion. I'd like to hear as many viewpoints on the question as I can get. Thanks.

Reply to
KK6GM

My own preference is more or less "as many microcontrollers as it takes" rather than "let one big processor run everything." Plus, with a sensible internal network, you also get diagnostic control and monitoring capability almost for free.

--
Rich Webb     Norfolk, VA
Reply to
Rich Webb

In message , KK6GM writes

You could use two ARM parts. The same RTOS on both and the GUI on one.

Thus the same dev tools for both SBCs

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
Reply to
Chris H

If HW costs aren't important then I would favor a multi-MCU design. It's so nice to be able to program hard realtime apps without having to worry about other SW interfering, the way you would in an RTOS. You can mix various MCUs for their different strong points. You can also make use of multiple programmers (more cost, but doubling up on programming to achieve a shorter schedule), each working in a clearly defined domain (that is, their own chip). That requires a good bit of up-front design to make sure the inter-MCU communication channels are sufficiently designed to support the application.
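To make the "up-front design" point concrete, here is a minimal sketch of what one of those inter-MCU channels might look like: a byte-oriented frame with a sync byte, message ID, length, and XOR checksum. The framing and field layout here are invented for illustration, not taken from any particular product.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical frame layout: [0xA5][msg id][len][payload...][checksum] */
#define FRAME_SYNC 0xA5

static uint8_t frame_checksum(const uint8_t *buf, size_t n)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum ^= buf[i];          /* simple XOR over header + payload */
    return sum;
}

/* Build a frame into out[]; returns the total frame length. */
size_t frame_build(uint8_t *out, uint8_t id,
                   const uint8_t *payload, uint8_t len)
{
    out[0] = FRAME_SYNC;
    out[1] = id;
    out[2] = len;
    for (uint8_t i = 0; i < len; i++)
        out[3 + i] = payload[i];
    out[3 + len] = frame_checksum(out, 3 + (size_t)len);
    return 4 + (size_t)len;
}

/* Validate a received frame; returns 1 if intact, 0 otherwise. */
int frame_valid(const uint8_t *in, size_t n)
{
    if (n < 4 || in[0] != FRAME_SYNC || in[2] != n - 4)
        return 0;
    return frame_checksum(in, n - 1) == in[n - 1];
}
```

Even a toy protocol like this forces the questions that matter up front: who retransmits on checksum failure, how message IDs are partitioned between chips, and how the format can grow without breaking old firmware.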

And for a completely different approach, perhaps you can find a good multicore chip that will do the job. I'm not so willing to suggest that it will make programming easier or quicker, though.

JJS

Reply to
John Speth

FWIW, IME SBC's w/ GUI have high latency, which is NG for RT (Is that enough abbreviations?)

I see cost as the major disadvantage of a dual-CPU approach.

RK

Reply to
d_s_klein

Doing everything on one processor is much simpler, cheaper and in many ways better than separating the functionality.

Every other programmable device involved in the system is a PITA in development, production and support. There will be a lot of work to define and maintain the interfaces. There will be inevitable compatibility issues between the different versions of software and hardware for the different parts of the system. There will be development problems because "A" can't work without "B", and "B" can't work without "A"; so you will need to develop separate test setups for "A" and for "B".

So, my advice is stay with one core. If the main CPU doesn't have enough realtime or I/O capacity, use low-level MCU devices to help with it.

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant


Reply to
Vladimir Vassilevsky

Opinions are cheap, no? Here's another one. Do the real time stuff in an FPGA! Software will always be hard to debug, especially for real time stuff. Consider what happens with a processor: using a sequential language to program a sequential processor to emulate a lot of separate functions happening in parallel. In an FPGA, programmed in an HDL, you get exactly that: parallel programs with parallel execution on parallel hardware! Every part can be debugged in isolation without impacting the rest.

I am working on a highly time critical design right now... in fact, it will be used to define time itself! I would have done the entire thing in the FPGA with only the fastest parts being done in PECL. But my friend who actually has the contract (I am doing the FPGA part for the initial phase) wants to use a CPU for the stuff that doesn't have to be in the FPGA. In this case the GUI is a clock display with buttons to set the digits, just like a clock at home. There is nothing that couldn't be done in the FPGA, but often people have the idea that an FPGA is hard to design because it is hardware. In fact, it is easier to design and debug because it is all in parallel and it can be simulated with infinite visibility.

I expect that you can find an FPGA based motor control board which will attach to whatever processor you wish to use for your GUI. This eliminates the risk of not having enough processor capability to meet your real time requirements.

Rick

Reply to
rickman

What sort of bandwidth do you need between the GUI and the real-time stuff?

My knee-jerk response is to design the software so that it works as if there were a serial link between the GUI and the real-time stuff (i.e. with well-defined commands and responses, where the responses aren't required to be either immediate or in any particular order).

Whether you do one processor or two then becomes a tactical, rather than a strategic decision. In general if it's a really complicated GUI, and if sales and marketing are going to want to continually stir the pot, then having a separate processor for the GUI means you can leave the GUI software development to a team that maybe isn't as astute with the hard real time stuff (and hope they don't manage to screw things up). OTOH, if money ever does become an object, or if your software team is small, then you can keep it all on one processor.

Note that the "real time with a really big OS" options that I know of all devolve down to one of two things: either Windows or Linux riding on top of a 'real' RTOS, with all the real-time programming done _outside_ of the 'big box' paradigm and Windows or Linux running as a fairly low-priority task under the RTOS, or an "all in one" big RTOS like VxWorks.

If you go with Windows or Linux, you'll need that "looks like a serial interface" API between the real-time part and the GUI no matter what, so plan on it. If you use something like VxWorks, then your GUI will have to live in one or more low-priority tasks, and once again you'll need that "looks like a serial interface" API, or you'll be handing your GUI the power to break your real time control loops.
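One way to picture that "looks like a serial interface" API is a tagged command/response scheme: every request carries a tag that the realtime side echoes back, so replies don't have to be immediate or in order. The struct layout and command names below are illustrative only, not a real product's protocol.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical command/response pairing: every request carries a
 * tag; the reply echoes it, so responses may arrive out of order. */
typedef struct {
    uint16_t tag;       /* echoed back in the response */
    char     cmd[8];    /* e.g. "SETRPM" (illustrative command set) */
    int32_t  arg;
} request_t;

typedef struct {
    uint16_t tag;       /* matches the originating request */
    int32_t  status;    /* 0 = ok, or the requested value */
} response_t;

/* RT side: handle one request against a motor speed variable. */
response_t rt_handle(const request_t *req, int32_t *motor_rpm)
{
    response_t rsp = { req->tag, 0 };
    if (strcmp(req->cmd, "SETRPM") == 0)
        *motor_rpm = req->arg;              /* apply the new setpoint */
    else if (strcmp(req->cmd, "GETRPM") == 0)
        rsp.status = *motor_rpm;            /* report current value */
    else
        rsp.status = -1;                    /* unknown command */
    return rsp;
}
```

The point of the tag is exactly the decoupling Tim describes: the GUI can fire off several requests, block on none of them, and match replies as they trickle back, whether the two ends are separate boards on a wire or two tasks on one CPU.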

--
Tim Wescott
Wescott Design Services
Reply to
Tim Wescott

My furnace controller has a Linux SBC and *five* additional MCUs (one for each zone, plus one for the furnace itself) for realtime control. I put the MCUs on the I2C bus and treat them like very smart peripherals.
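The "very smart peripheral" idea amounts to giving each zone MCU a register map, so the Linux side reads and writes it over I2C like any sensor chip. A minimal sketch of what such a register map might look like (the register names and semantics here are invented, not DJ's actual design):

```c
#include <stdint.h>

/* Hypothetical register map for a zone-controller MCU addressed
 * over I2C like any other smart peripheral. */
enum {
    REG_SETPOINT = 0x00,   /* desired temperature, 0.1 degC units */
    REG_CURRENT  = 0x01,   /* measured temperature (read-only) */
    REG_STATUS   = 0x02,   /* bit0: damper open, bit1: fault */
    REG_COUNT
};

typedef struct {
    uint8_t regs[REG_COUNT];
} zone_mcu_t;

/* Handlers that the MCU's I2C slave ISR would call. */
void zone_write(zone_mcu_t *z, uint8_t reg, uint8_t val)
{
    if (reg == REG_SETPOINT)            /* only the setpoint is writable */
        z->regs[REG_SETPOINT] = val;
}

uint8_t zone_read(const zone_mcu_t *z, uint8_t reg)
{
    return (reg < REG_COUNT) ? z->regs[reg] : 0xFF;
}
```

The appeal of this pattern is that the Linux side needs no custom protocol stack at all: standard I2C register reads and writes are the whole interface, and each MCU keeps its control loop running regardless of what the host is doing.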


Reply to
DJ Delorie

It makes sense to put all the realtime stuff (motor control) on an embedded board and have the UI separate. You could even use a desktop/laptop PC for the UI, or an off-the-shelf PC104+ system (ex: Versalogic.com). Then when you want to add "new" UI features in a year (put trend data into a database, or remote monitoring over the Internet, for example), you can just update the PC software and not touch the embedded hardware. Also, with VPN and remote sessions, you could log in to the UI remotely without developing any additional code.

Running a virtual OS on a multi-core hardware platform is a possible solution that allows you to split the software while still using one SBC, but I'm not particularly keen on that approach as it seems overly convoluted and relies on some vendor's virtual hosting approach.

2SBC Adv:
+ Independent RT & UI version control, separate updates; different types of programmer working independently. UI could be developed without motor hardware.
+ UI could be almost anything from a PC (Windows, Linux, QNX).
+ UI flexibility allows for infinite possibility of new features in the future without disturbing the solid RT code.
+ Cost not an issue for low volume, high price product.

2SBC DisAdv:
- Communication interfaces: software (doesn't exist, has to be written).
- Communication interfaces: hardware (cables across the factory floor? or wireless network noise issues).

National Instruments makes a cRIO product that has an onboard SBC with VxWorks and an FPGA that would possibly work for this application, but unfortunately they mainly support LabVIEW as the programming language, which most embedded programmers, (including me), don't particularly care for.

Reply to
Anony Mous

Why not use some slow (by today's standards) and hence low-power x86 SBC to run the RT part? That would simplify any backup power and cooling requirements.

If 24x7 operation is required, the combination of Internet connectivity and Windows is a bit problematic. While current Windows versions are quite stable, the Internet connection would require frequent installations of security updates, which would require frequent reboots. If the RT part is on a separate processor, it could ride through the UI side reboots.

Does the current UI consist of mechanical switches, potentiometers and indicator lights or some on screen simulation of these ?

Would it be possible to leave the current UI as a backup and simply let the new UI send (e.g. via serial line) commands simulating the actual button operations? Of course, handling of the most complex functionality should be removed from the RT side and replaced with a sequence of simpler commands.

With such products, existing and functioning parts should be reused as much as possible, which may cause some ugly interfacing situations. However, if the expected new product family life time is a decade or more, it might make sense to improve the interface between the systems.

Keeping the RT and UI systems separate is a good idea if the RT system can remain the same for the rest of the product family life time, but it may be necessary to upgrade the UI side 1-2 times during the same period (e.g. due to consumer demand), both HW and/or SW.

Reply to
upsidedown

Thanks for the replies so far. One thing I should make clear (many of you have alluded to it) is that if we do go with one processor, there will be an "as-if-through-a-wire" interface between the two sections. The GUI will have a very controlled and limited view of the innards of the realtime component, and vice versa. So the effort to develop an interface is probably close to a wash between the one and two processor approaches.

Reply to
KK6GM

WinSystems.com has done a fair job of getting the 'horsepower' up lately, so a single SBC might not be as slow at handling a more complex set of tasks, and they (probably others too) have an evaluation system so you could check out a board or two for probably not much more than shipping costs.

Reply to
1 Lucky Texan


In fact I am completely open to using an FPGA where it would be appropriate. In fact I would go further and say that it's very likely there will be an FPGA in the new realtime design - just a question of in what capacity. Now the real question is, VHDL or Verilog? :)

Reply to
KK6GM


Hmmm, my "in fact" key seems to be stuck in the down state...

Reply to
KK6GM

KK6GM schrieb:

Since modern SBCs are likely to be multi-core processors, I would try to get away with one SBC and dedicate one core to the GUI part and one core to the real-time processing. But maybe there are some folks around here that have more experience with these kinds of designs than me...

Greetz, Sebastian

Reply to
Sebastian Doht

You say that the product is high value, low volume. In that case the software development costs will be lessened if you adopt a sensible architecture with multiple processors. You didn't state how many actuators/motors are attached to the system, but I tend to favour one processor per actuator, keeping the individual systems very simple. Usually smaller, less expensive processors can be used. A suitable network to connect them, along with some interlocking, will usually keep the various systems in sync and safe.

--
********************************************************************
Paul E. Bennett...............
Reply to
Paul E. Bennett

Like others I would favour separating into multiple processors. This mirrors the industrial approach with PLCs and HMIs.

Besides the advantage of being able to update the GUI relatively easily and separately, you can do the same with the control, i.e. you could change the motors from Hall-sensor commutated BLDC to sensorless switched reluctance with a 'simple' change of drive control. Or you could change the size of the motors and retain the same GUI.

It does, however, place high value on defining the network interface well at the beginning. Ways to get ID information and status from the real-time components. Control definitions that are not limited (either defined as unitless [motor speed as percent of full speed] so they scale with the peripheral or with units that don't saturate easily [motor speed as rpm in some sort of floating point representation]). And consider adding signals out of band to the normal communications such as stop etc... And make sure the definition is expandable so it can grow over time.
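Robert's unitless-control idea is easy to sketch: the GUI commands speed as a fraction of full scale, and each drive scales that locally against its own maximum, so swapping motors never touches the GUI. The fixed-point scale and function names below are assumptions for illustration.

```c
#include <stdint.h>

/* Commands carry speed as 0..10000 (hundredths of a percent of full
 * speed); each drive scales locally against its own maximum RPM, so
 * the same GUI works whatever motor is fitted. Names are illustrative. */
#define SPEED_FULL_SCALE 10000u

uint32_t speed_to_rpm(uint16_t permyriad, uint32_t motor_max_rpm)
{
    if (permyriad > SPEED_FULL_SCALE)
        permyriad = SPEED_FULL_SCALE;       /* clamp, never wrap */
    /* widen before multiplying so large maximums can't overflow */
    return (uint32_t)(((uint64_t)permyriad * motor_max_rpm)
                      / SPEED_FULL_SCALE);
}
```

Note the clamp: saturating at full scale instead of wrapping is exactly the "definitions that don't saturate easily" concern; a malformed command should pin the motor at its limit, not send it somewhere unexpected.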

The other advantage of having the GUI separate is that it is a lot easier to make it remote, or provide additional status displays.

Whether you can easily separate the various realtime components will depend on how closely they have to be synchronized. The latency requirements are stiffer for controlling a pair of motors driving an x/y table to follow an arc than they are for a motor running a mixer and a second running a feed screw.

Robert

Reply to
Robert Adsett


As we speak, er, write, I am helping a friend with a similar issue. He is a good embedded engineer and has done small PLD designs before, but nothing as grand as an FPGA, if you can call FPGA design "grand". He asked me to get him up and running on a project he has on a short fuse. I am writing the code for the first phase for him and will turn it over to him for the subsequent phases with a little hand holding. I asked a few questions and decided it would be best if his project were done in Verilog. My expertise in Verilog is not as good as VHDL, but I sincerely believe that VHDL would take him much more time to come up to speed on. My only reservation with Verilog is that there are a number of ways that you can shoot yourself in the foot without even knowing it until most of your blood has drained out. That can be a problem with a relative newbie.

As for someone who has the time to learn properly and isn't afraid of some strong typing (just how much of a man are you anyway?) I still prefer VHDL at this point. But ask me again when I am done with this project.

I am pretty certain that your work will move ahead more quickly and give you a lower cost, lower power and smaller size solution using a combined FPGA - CPU approach than a pure CPU approach will. There are a number of very low power and tiny x86 CPU boards out there. If you move the real time work off the CPU, the UI becomes very easy. With tons of parallel processing, the real time work is very easy in an FPGA.

BTW, if you find you want any help with the FPGA, that is my specialty. If you can't find a board that does what you want, I am sure I can whip something up for you pretty easily.

Rick

Reply to
rickman


Thanks again for the comments. I was just having a bit of fun with the VHDL v. Verilog comment - I've seen some of your posts on the various fpga newsgroups so I knew it was a bit of a hot topic for you. I'm already working to build up some competency in VHDL. By way of background I think that Ada is far superior to C and C++, so perhaps you can understand my choice in HDL. But I'm derailing my own thread, so I'll be quiet now.

Reply to
KK6GM
