Makefile or not?

What do you really use for embedded projects? Do you use "standard" makefiles or do you rely on IDE functionality?

Nowadays every MCU manufacturer provides an IDE, mostly for free and usually based on Eclipse (Atmel Studio and Microchip are probably the most notable exceptions). Anyway, most of them use arm-gcc as the compiler.

I usually try to compile the same project for both the embedded target and the development machine, so I can speed up development and debugging. I use the native IDE from the manufacturer of the target, and Code::Blocks (with MinGW) for compilation on the development machine. So I have two IDEs for a single project.

I'm thinking of finally moving to Makefiles; however, I don't know whether it is a good and modern choice. Do you use better alternatives?

My major reason to move from IDE compilation to Makefiles is testing. I would like to start adding unit tests to my projects. I understand a good solution is to link all the object files of the production code into a static library. This way it is very simple to replace production code with testing (mocking) code, simply by putting the testing object files ahead of the production static library during linking.

I think this kind of thing is easier to manage with a Makefile than with IDE compilation.

What do you think?

Reply to
pozz

I sometimes use the IDE project management to start with, or on very small projects. But for anything serious, I always use makefiles. I see it as important to separate the production build process from the development - I need to know that I can always pull up the source code for a project, do a "build", and get a bit-perfect binary image that is exactly the same as last time. This must work on different machines, preferably different OS's, and it must work over time. (My record is rebuilding a project that was a touch over 20 years old, and getting the same binary.)

This means that the makefile specifies exactly which build toolchain (compiler, linker, libraries, etc.) is used - and that does not change during a project's lifetime without very good reason.

The IDE, and debugger, however, may change - there I will often use newer versions with more features than the original version. And sometimes I might use a lighter editor for a small change, rather than the full IDE. So IDE version and build tools version are independent.

With well-designed makefiles, you can have different targets for different purposes. "make bin" for making the embedded binary, "make pc" for making the PC version, "make tests" for running the test code on the PC, and so on.
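For what it's worth, a minimal sketch of such a makefile skeleton might look like this (the compiler flags, file names and linker script are my assumptions, not from any real project):

CROSS := arm-none-eabi-
SRCS  := $(wildcard src/*.c)

.PHONY: bin pc tests

# Cross-compiled image for the embedded target (linker script name assumed).
bin:
	mkdir -p build/target
	$(CROSS)gcc -mcpu=cortex-m0 -mthumb -Os -Isrc -T stm32.ld --specs=nosys.specs $(SRCS) -o build/target/app.elf

# Native build of the same sources for development work on the PC.
pc:
	mkdir -p build/pc
	gcc -O0 -g -Isrc -DPC_BUILD $(SRCS) -o build/pc/app

# Build and run the test programs on the PC.
tests:
	mkdir -p build/pc
	gcc -O0 -g -Isrc -Itests $(filter-out src/main.c,$(SRCS)) $(wildcard tests/*.c) -o build/pc/run_tests
	./build/pc/run_tests

A real makefile would of course compile per-file objects into separate build trees (more on that below), but the point is that the top-level targets don't care which IDE, if any, is open.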

I would not bother with that. I would have different variations in the build handled in different build tree directories.

It can /all/ be managed from make.

Also, a well-composed makefile is more efficient than an IDE project manager, IME. When you use Eclipse to do a build, it goes through each file to calculate the dependencies - so that you re-compile all the files that might be affected by the last changes, but not more than that. But it does this dependency calculation anew each time. With make, you can arrange to generate dependency files using gcc, and these dependency files get updated only when needed. This can save significant time in a build when you have a lot of files.
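As a concrete sketch of that trick (paths and variable names assumed), the usual gcc idiom looks like this:

OBJS := $(patsubst src/%.c,build/obj/%.o,$(wildcard src/*.c))
DEPS := $(OBJS:.o=.d)

# -MMD writes a .d fragment next to each object as a side effect of compiling;
# -MP adds dummy targets for the headers so a deleted header doesn't break make.
build/obj/%.o: src/%.c
	@mkdir -p $(dir $@)
	$(CC) $(CFLAGS) -MMD -MP -c $< -o $@

# Pull in whatever dependency files already exist; they only change when a file
# is actually recompiled, so there is no per-build dependency scan.
-include $(DEPS)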

Reply to
David Brown

Fortunately modern IDEs separate the toolchain well from the IDE itself. Most manufacturers let us install the toolchain as a separate package. I remember that some years ago the scenario was different and the compiler was "included" in the IDE installation.

However, the problem here isn't the compiler (toolchain), which nowadays is usually arm-gcc. The big issue is the libraries and include files that the manufacturer gives you to save some time writing peripheral drivers. I have to install the full IDE and copy the interesting headers and libraries into my own folders.

Another small issue is the linker script, which works like a charm in the IDE when you start a new project from the wizard. At least for me, it's very difficult to write a linker script from scratch: you need a deep understanding of the C libraries (newlib, redlib, ...) to write a correct one. My solution is to start from the IDE wizard and copy the generated linker script into my make-based project.
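(In the makefile the copied script is then just another linker flag - something like this, with the file name being whatever the wizard produced:)

LDSCRIPT := ld/stm32f091xc_flash.ld     # copied from the IDE-generated project
LDFLAGS  += -T $(LDSCRIPT) -Wl,-Map=build/target/app.map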

Could you explain?

Yes, that's for sure!

Reply to
pozz

You can do that to some extent, yes - you can choose which toolchain to use. But your build process is still tied to the IDE - your choice of directories, compiler flags, and so on is all handled by the IDE. So you still need the IDE to control the build, and different versions of the IDE, or different IDEs, do not necessarily handle everything in the same way.

That's fine. Copy the headers, libraries, SDK files, whatever, into your project folder. Then push everything to your version control system. Make the source code independent of the SDK, the IDE, and other files - you have your toolchain (and you archive the zip/tarball of the gnu-arm-embedded release) and your project folder, and that is all you need for the build.

Again, that's fine. IDE's and their wizards are great for getting started. They are just not great for long-term stability of the tools.

You have a tree something like this:

Source tree:

  project / src / main
                  drivers

Build trees:

  project / build / target
                    debug
                    pctest

Each build tree might have subtrees:

  project / build / target / obj  / main
                                    drivers
  project / build / target / deps / main
                                    drivers
  project / build / target / lst  / main
                                    drivers

And so on.

Your build trees are independent. So there is no mix of object files built in the "target" directory for your final target board, or the "debug" directory for the version with debugging code enabled, or the version in "pctest" for the code running on the PC, or whatever other builds you have for your project.
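In make, a sketch of that layout could be (variable and directory names are only an example):

VARIANT ?= target                # target | debug | pctest
BUILD   := build/$(VARIANT)
OBJDIR  := $(BUILD)/obj
DEPDIR  := $(BUILD)/deps
LSTDIR  := $(BUILD)/lst

OBJS := $(patsubst src/%.c,$(OBJDIR)/%.o,$(wildcard src/*/*.c))

# Every variant compiles into its own obj/deps/lst trees, so a "target"
# object can never get mixed into a "debug" or "pctest" link.
$(OBJDIR)/%.o: src/%.c
	@mkdir -p $(OBJDIR)/$(dir $*) $(DEPDIR)/$(dir $*) $(LSTDIR)/$(dir $*)
	$(CC) $(CFLAGS) -MMD -MF $(DEPDIR)/$*.d -Wa,-adhlns=$(LSTDIR)/$*.lst -c $< -o $@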

Of course, if build times are important, you drop Windows and use Linux, and get a two to four-fold increase in build speed on similar hardware. And then you discover ccache on Linux and get another leap in speed.
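(For the record, wiring ccache in is a one-liner in the makefile, assuming it is installed:)

CCACHE := $(shell command -v ccache 2>/dev/null)
CC     := $(CCACHE) arm-none-eabi-gcc   # if ccache is absent, CCACHE is empty and the compiler runs directly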

Reply to
David Brown

GNU makefiles.

And they're almost all timewasting piles of...

If you're going to use an IDE, it seems like you should pick one and stick with it so that you get _good_ at it.

I use Emacs, makefiles, and meld.

How awful.

I've tried IDEs. I've worked with others who use IDEs and watched them work, and compared it to how I work. It looks to me like IDEs are a tremendous waste of time.

--
Grant Edwards               grant.b.edwards        Yow! ... this must be what 
                                  at               it's like to be a COLLEGE 
                              gmail.com            GRADUATE!!
Reply to
Grant Edwards

It's impossible to overemphasize how important that is. Somebody should be able to check out the source tree and a few tools and then type a single command to build production firmware. And you need to be able to _automate_ that process.

If building depends on an IDE, then there's always an intermediate step where a person has to sit in front of a PC for a week tweaking project settings to get the damn thing to build on _this_ computer rather than on _that_ computer.

And in my experience, IDEs do not. The people I know who use Eclipse with some custom set of plugins spend days and days when they need to build on computer B instead of computer A. I just scp "build.sh" to the new machine and run it. It contains a handful of Subversion checkout commands and a "make". And I can do it remotely. From my phone if needed.

Yes! Simply upgrading the OS often seems to render an IDE incapable of building a project: another week of engineering time goes down the drain tweaking the "project settings" to get things "just right".

--
Grant Edwards               grant.b.edwards        Yow! JAPAN is a WONDERFUL 
                                  at               planet -- I wonder if we'll 
                              gmail.com            ever reach their level of 
                                                   COMPARATIVE SHOPPING ...
Reply to
Grant Edwards

One approach is to put the tools into a VM or a container (eg Docker), so that when you want to build you pull the container and you get an identical build environment to the last time anyone built it. Also, your continuous integration system can run builds and tests in the same environment as you're developing on.
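A sketch of how the makefile can drive that (the image name here is made up):

IMAGE := registry.example.com/fw-build:2018-12   # pinned toolchain image (hypothetical)

.PHONY: docker-bin
docker-bin:
	docker run --rm -v $(CURDIR):/work -w /work $(IMAGE) make bin

The same target works unchanged on a developer machine and in the CI system.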

Unfortunately vendors have a habit of shipping IDEs for Windows only, which makes this harder. It's not so much of a problem for the actual compiler - especially if that's GCC under the hood - but more so for ancillary tools (eg configuration tools for peripherals, flash image builders, etc), which are sometimes not designed to be scripted.

(AutoIt is my worst enemy here, but it has been the only way to get the job done in some cases)

Decoupling your build from the vagaries of the IDE, even if you can trust that you'll always build on a fixed platform, is still a good thing - many IDEs still don't play nicely with version control, for example.

Theo

Reply to
Theo Markettos

We use cmake for that--it allows unit testing on a PC, as you say, and also automates the process of finding libraries, e.g. for emulating peripherals.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs


Second that!

We do development in, and deliver, VMs to customers now, so they are CERTAIN to receive exactly the 'used for production build' versions of every tool, library, driver required for the JTAG gizmo, referenced component, etc., etc., etc. Especially important when some tools won't work under the latest version of Winbloze! Saves enormous headaches sometime down the road when an update must be made...

Hope that helps, Best Regards, Dave

Reply to
Dave Nadler

+1 on those. My memory isn't good enough any more to remember all the byzantine steps through an IDE to re-complete all the tasks my projects require.

Especially since each MCU seems to have a *different* IDE with *different* procedures to forget...

And that's assuming they run on Linux in the first place ;-)

Reply to
DJ Delorie

The most important rule to remember is:

Never, ever, use any software written or provided by the silicon vendor. Every time I've failed to obey that rule, I've regretted it.

I've heard rumors that Intel at one time wrote a pretty good C compiler for x86.

However, having used other development software from Intel, I find that impossible to believe. [Actually, Intel MDS-800 "blue boxes" weren't bad as long as you ran CP/M on them instead of ISIS.]

And don't get me started on compilers and tools from TI, Motorola, or various others either...

Some of them have put some effort into getting good Gnu GCC and binutils support for their processors, and that seems to produce good results. If only they had realized that's all they really needed to do in the _first_ place...

--
Grant Edwards               grant.b.edwards        Yow! Can you MAIL a BEAN 
                                  at               CAKE? 
                              gmail.com
Reply to
Grant Edwards

That is possible, but often more than necessary. Set up your build sensibly, and it only depends on the one tree for the toolchain, and your source code tree. It should not depend on things like the versions of utility programs (make, sed, touch, etc.), environment variables, and that kind of thing.

Sometimes, however, you can't avoid that - especially for Windows-based toolchains that store stuff in the registry and other odd places.

That is thankfully rare these days. There are exceptions, but most major vendors know that is a poor habit.

Yes, these are more likely to be an issue. Generally they are not needed for rebuilding the software - once you have run the wizards and similar tools, the job is done and the generated source can be preserved. But it can be an issue if you need to re-use the tools for dealing with changes to the setup.

Often IDE's have good integration with version control for the source files, but can be poor for the project settings and other IDE files. Typically that sort of thing is held in hideous XML files with thoughtless line breaks, making it very difficult to do comparisons and change management.

Reply to
David Brown

IDE's are extremely useful tools - as long as you use them for their strengths, and not their weaknesses. I use "make" for my builds, but I use an IDE for any serious development work. A good quality editor, with syntax highlighting, navigation, as-you-type checking, integration with errors and warnings from the builds - it is invaluable as a development tool.

Reply to
David Brown

How does it automate finding emulation libraries? That sounds like a cool feature.

We use GNU Makefiles, but we handle the matching up of emulation libraries with the real thing by hand. We then typically use different source directories for emulation libraries and actual drivers.
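Roughly like this, with the directory names only as an illustration:

# The PC/test build compiles the emulation sources instead of the real drivers.
ifeq ($(TARGET),pc)
  DRIVER_DIR := src/emulation
else
  DRIVER_DIR := src/drivers
endif
SRCS += $(wildcard $(DRIVER_DIR)/*.c)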

Greetings,

Jacob

--
A password should be like a toothbrush. Use it every day; 
change it regularly; and DON'T share it with friends.
Reply to
Jacob Sparre Andersen

Ok, I got your point, and I usually arrange everything in a similar way to what you describe (even if I put the .o, .d and .lst files in the same target-dependent directory). I also have to admit that all major IDEs nowadays arrange output files in this manner.

Anyway testing is difficult, at least for me.

Suppose you have a simple project with three source files: main.c, modh.c and modl.c (of course you have modh.h and modl.h).

Now you want to create a unit test for the modh module, which depends on modl. During the test, modl should be replaced with a dummy module, a mock object. What is your approach?

In project/tests I create a test_modh.c source file that should be linked against modh.o (the original production code) and project/tests/modl.o, the mock object for modl.

One approach could be to re-compile modh.c during test compilation. However, it's difficult to replace the main modl.h with the mock's modl.h from the test directory. modh.c has a simple

#include "modl.h"

directive, and this will point to the modl.h in the *same* directory. I wasn't able to instruct the compiler to use the modl.h from the tests directory.

Moreover, it could be useful to test the very same objects generated for production. I found a good approach: the production code is all compiled into a static library, libproduct.a, and the tests are linked against that static library. The following command, run in the project/tests/ folder,

gcc test_modh.o modl.o libproduct.a -o test_modh.exe

should generate a test_modh.exe containing the mock object for modl and the *same* modh object code as in production.
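In makefile terms, the whole scheme might be something like this (directory layout assumed):

PROD_OBJS := build/obj/main.o build/obj/modh.o build/obj/modl.o

# All production objects are archived into the static library.
build/libproduct.a: $(PROD_OBJS)
	$(AR) rcs $@ $^

# The mock modl.o is given to the linker *before* the library, so the
# production modl.o inside libproduct.a is never pulled in (its symbols are
# already defined), while modh.o is extracted from the archive unchanged.
tests/test_modh.exe: tests/test_modh.o tests/modl.o build/libproduct.a
	gcc $^ -o $@

# tests/test_modh.o and tests/modl.o are built by make's implicit %.o: %.c rule.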
Reply to
pozz

[Difficult to apply that rule for an FPGA (except some Lattice parts).]

Also, ARM seems to require that its licensees support CMSIS. This truly excellent idea seems to be terribly poorly thought out and implemented. You get header files that pollute your program namespace with hundreds or thousands of symbols and macros with unintelligible names, many of which are manufacturer-specific and not even CMSIS-related.

I know there's opencm3 which seems to be better, but still...

Standard APIs like CMSIS need *very* disciplined design and rigorous management to minimise namespace pollution. Unfortunately we don't seem to be there, yet, unless I've missed something major.

How do people handle this?

Clifford Heath.

Reply to
Clifford Heath

True

You're putting that mildly. I recently developed some firmware for an NXP KL03 (Cortex-M0) part. It's a tiny part with something like 8KB of flash and a couple hundred bytes of RAM. Of course NXP provides IDE-based "sample apps" that take up a gigabyte of disk space and include CMSIS (which is itself hundreds, if not thousands, of files that define APIs for all of the peripherals and comprise layer upon layer of macros calling macros calling functions calling functions full of other macros calling macros). Trying to build even an empty main() using the CMSIS libraries resulted in executable images several times larger than the available flash.

I finally gave up and tossed out everything except a couple of the lowest level include files that defined register addresses for the peripherals I cared about. Then I wrote my own functions to access peripherals and a Makefile to build the app.

In the end, I cursed myself for forgetting the rule of "no silicon vendor software". It would have been faster to start with nothing and begin by typing register addresses from the user manual into a .h file.

Yep, CMSIS is spectacularly, mind-numbingly awful.

Lots of teeth-gritting and quiet swearing.

--
Grant Edwards               grant.b.edwards        Yow! Mr and Mrs PED, can I 
                                  at               borrow 26.7% of the RAYON 
                              gmail.com            TEXTILE production of the 
                                                   INDONESIAN archipelago?
Reply to
Grant Edwards

Bourbon in general, though I have it on authority that a nice rum daiquiri is also quite effective.

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com 
Email address domain is currently out of order.  See above to fix.
Reply to
Rob Gaddi


About CMSIS: it is wonderful if you use only the absolutely necessary files. I always extract from the gigabyte only the core_xxx.h files and the single header file with the register definitions for the microcontroller. For example: core_cm0.h, core_cmInstr.h, core_cmFunc.h and stm32f091xc.h - that's simply the CMSIS for the STM32F091 chip in use. In fact, the core_xxx files are the same for a given architecture (cm0, cm3, etc.); you only need the .h file for your chip's registers.
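In the makefile it then reduces to a single private include directory (the directory name is just my convention):

CMSIS_DIR := src/cmsis     # contains only core_cm0.h, core_cmInstr.h,
                           # core_cmFunc.h and stm32f091xc.h, copied by hand
CFLAGS    += -I$(CMSIS_DIR)

Nothing else from the vendor package ever appears on the include path.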

Reply to
raimond.dragomir

There is a balance here - you can keep the good parts, and drop the bad parts. But sometimes it takes effort, and sometimes keeping a few bad parts is more practical.

Manufacturer-provided headers for declaring peripherals are usually very convenient and save a lot of work. The same applies to the CMSIS headers for Cortex internal peripherals, assembly function wrappers, etc.

On the other hand, the "wizard" and "SDK" generated code is often appalling, with severe lasagne programming (a dozen layers of function calls and abstractions for something that is just setting a peripheral hardware register value).

I also find startup code and libraries can be terrible - they are often written in assembly simply because they have /always/ been written in assembly, and often bear the scars of having been translated from the original 6805 assembly code (or whatever) through 68k, PPC, ARM, etc., probably by students on summer jobs.

I can relate to your "SDK uses more code than the chip". I had occasion to use a very small Freescale 8-bit device a good number of years ago. The device had 2K or so of flash. The development tools were over 1 GB of disk space. I thought I'd use the configuration tools to save time reading the reference manual. The "wizard" generated code for reading the ADC turned out at 2.5 KB code space. On reading the manual, it turned out that all that was necessary for what I needed was to turn on one single bit in a peripheral register.

Still, I would hate to have to write the peripheral definition files by hand - there is a lot of use there, if you avoid the generated code.

Reply to
David Brown
