Makefile or not?

What do you really use for embedded projects? Do you use "standard"
makefiles or do you rely on IDE functionality?

Nowadays every MCU manufacturer provides an IDE, mostly for free, usually
based on Eclipse (Atmel Studio and Microchip are probably the most
notable exceptions).
Anyway, most of them use arm gcc as the compiler.

I usually try to compile the same project for both the embedded target and
the development machine, so I can speed up development and debugging. I
usually use the native IDE from the manufacturer of the target, and
Code::Blocks (with MinGW) for compilation on the development machine.
So I have two IDEs for a single project.

I'm thinking of finally moving to Makefiles, however I don't know if that
is a good and modern choice. Do you use better alternatives?

My major reason for moving from IDE compilation to Makefiles is testing. I
want to start adding unit tests to my projects. I understand a good
solution is to link all the object files of the production code into a
static library. That way it is very simple to replace production
code with testing (mocking) code, simply by listing the test object
files before the static library of production code during linking.

I think this kind of thing can be managed with a Makefile rather than with
IDE compilation.

What do you think?

Re: Makefile or not?
On 03/12/18 09:18, pozz wrote:

I sometimes use the IDE project management to start with, or on very
small projects.  But for anything serious, I always use makefiles.  I
see it as important to separate the production build process from the
development - I need to know that I can always pull up the source code
for a project, do a "build", and get a bit-perfect binary image that is
exactly the same as last time.  This must work on different machines,
preferably different OS's, and it must work over time.  (My record is
rebuilding a project that was a touch over 20 years old, and getting the
same binary.)

This means that the makefile specifies exactly which build toolchain
(compiler, linker, library, etc.) are used - and that does not change
during a project's lifetime, without very good reason.

The IDE, and debugger, however, may change - there I will often use
newer versions with more features than the original version.  And
sometimes I might use a lighter editor for a small change, rather than
the full IDE.  So IDE version and build tools version are independent.

With well-designed makefiles, you can have different targets for
different purposes.  "make bin" for making the embedded binary, "make
pc" for making the PC version, "make tests" for running the test code on
the pc, and so on.
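
In outline, something like this - a rough sketch, with illustrative file
names (the real compile and link rules live elsewhere in the makefile,
and recipe lines need a leading tab):

   .PHONY: bin pc tests

   # Each target just asks for the right artifact from its own build tree.
   bin: build/target/app.elf
   pc: build/pctest/app_pc.exe

   tests: build/pctest/test_runner.exe
           ./build/pctest/test_runner.exe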


> I understand a good solution is to link all the object files of the
> production code into a static library. [...] simply by listing the test
> object files before the static library of production code during linking.

I would not bother with that.  I would have different variations in the
build handled in different build tree directories.

> I think this kind of thing can be managed with a Makefile rather than
> with IDE compilation.

It can /all/ be managed from make.

Also, a well-composed makefile is more efficient than an IDE project
manager, IME.  When you use Eclipse to do a build, it goes through each
file to calculate the dependencies - so that you re-compile all the
files that might be affected by the last changes, but not more than
that.  But it does this dependency calculation anew each time.  With
make, you can arrange to generate dependency files using gcc, and these
dependency files get updated only when needed.  This can save
significant time in a build when you have a lot of files.
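
The usual gcc idiom, as a sketch (the variable names are mine):

   # -MMD writes a .d dependency fragment next to each .o as a side
   # effect of compilation; -MP adds dummy targets so deleted headers
   # don't break the build.
   CFLAGS += -MMD -MP

   $(BUILD)/%.o: src/%.c
           $(CC) $(CFLAGS) -c $< -o $@

   # Pull in whatever dependency files exist so far; on the first build
   # there are none, and that's fine.
   -include $(wildcard $(BUILD)/*.d)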



Re: Makefile or not?
On 03/12/2018 11:06, David Brown wrote:
> This means that the makefile specifies exactly which build toolchain
> (compiler, linker, library, etc.) are used - and that does not change
> during a project's lifetime, without very good reason.

Fortunately modern IDEs separate the toolchain well from the IDE itself.
Most manufacturers let us install the toolchain as a separate setup. I
remember that some years ago the scenario was different, and the compiler
was "included" in the IDE installation.

However the problem here isn't the compiler (toolchain), which nowadays is
usually arm-gcc. The big issue is the libraries and includes that the
manufacturer gives you to save some time in writing peripheral drivers.
I have to install the full IDE and copy the relevant headers and
libraries into my folders.

Another small issue is the linker script file, which works like a charm in
the IDE when you start a new project from the wizard.
At least for me, it's very difficult to write a linker script from
scratch. You need a deep understanding of the C libraries
(newlib, redlib, ...) to write a correct linker script.
My solution is to start with the IDE wizard and copy the generated linker
script into my make-based project.


> I would not bother with that.  I would have different variations in the
> build handled in different build tree directories.

Could you explain?


> It can /all/ be managed from make.

Yes, that's for sure!



Re: Makefile or not?
On 03/12/18 12:13, pozz wrote:
> Fortunately modern IDEs separate the toolchain well from the IDE itself.
> Most manufacturers let us install the toolchain as a separate setup.

You can do that to some extent, yes - you can choose which toolchain to
use.  But your build process is still tied to the IDE - your choice of
directories, compiler flags, and so on is all handled by the IDE.  So
you still need the IDE to control the build, and different versions of
the IDE, or different IDEs, do not necessarily handle everything in the
same way.

> The big issue is the libraries and includes that the manufacturer gives
> you to save some time in writing peripheral drivers.

That's fine.  Copy the headers, libraries, SDK files, whatever, into
your project folder.  Then push everything to your version control
system.  Make the source code independent of the SDK, the IDE, and other
files - you have your toolchain (and you archive the zip/tarball of the
gnu-arm-embedded release) and your project folder, and that is all you
need for the build.

> My solution is to start with the IDE wizard and copy the generated
> linker script into my make-based project.

Again, that's fine.  IDE's and their wizards are great for getting
started.  They are just not great for long-term stability of the tools.

> Could you explain?

You have a tree something like this:

Source tree:

project / src / main
                drivers

Build trees:

project / build / target
                  debug
                  pctest

Each build tree might have subtrees :

project / build / target / obj / main
                                 drivers
project / build / target / deps / main
                                  drivers
project / build / target / lst / main
                                 drivers

And so on.

Your build trees are independent.  So there is no mix of object files
built in the "target" directory for your final target board, or the
"debug" directory for the version with debugging code enabled, or the
version in "pctest" for the code running on the PC, or whatever other
builds you have for your project.
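
In the makefile this is just the same pattern rule parameterised by the
build directory - a sketch, with illustrative names:

   BUILD ?= build/target        # or build/debug, build/pctest, ...

   $(BUILD)/obj/%.o: src/%.c
           @mkdir -p $(dir $@)
           $(CC) $(CFLAGS) -c $< -o $@

Invoked as "make BUILD=build/pctest" and so on, the builds never tread
on each other's object files.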


> Yes, that's for sure!

Of course, if build times are important, you drop Windows and use Linux,
and get a two- to four-fold increase in build speed on similar hardware.
And then you discover ccache on Linux, and get another leap in speed.
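
If the makefile takes the compiler from a variable, enabling ccache is a
one-line change (assuming ccache is installed and the rest of the
makefile uses $(CC)):

   CC := ccache arm-none-eabi-gcc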


Re: Makefile or not?
On 03/12/2018 12:57, David Brown wrote:
> Your build trees are independent.  So there is no mix of object files
> built in the "target" directory for your final target board, or the
> "debug" directory for the version with debugging code enabled, or the
> version in "pctest" for the code running on the PC, or whatever other
> builds you have for your project.

Ok, I got your point, and I usually arrange everything in a way similar
to your description (even if I put the .o, .d and .lst files in the same
target-dependent directory).  I also have to admit that all major IDEs
nowadays arrange output files in this manner.

Anyway testing is difficult, at least for me.

Suppose you have a simple project with three source files: main.c,
modh.c and modl.c (and of course you have modh.h and modl.h).

Now you want to create a unit test for the modh module, which depends on
modl.  During the test, modl should be replaced with a dummy module, a
mock object.  What is your approach?

In project/tests I create a test_modh.c source file that should be
linked against modh.o (the original production code) and
project/tests/modl.o, the mock object for modl.

One approach could be to re-compile modh.c during the test build.
However it's difficult to replace the main modl.h with the modl.h of the
mock object in the tests directory.
modh.c will have a simple

   #include "modl.h"

directive, and this will point to the modl.h in the *same* directory.  I
haven't been able to instruct the compiler to use the modl.h from the
tests directory.

Moreover it could be useful to test the very same object code generated
for production.  I found a good approach: the production code is all
compiled into a static library, libproduct.a, and the tests are compiled
against that static library.
The following command, run in the project/tests/ folder

   gcc test_modh.o modl.o libproduct.a -o test_modh.exe

should generate a test_modh.exe with the mock object for modl and the
*same* modh object code as production.
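
In Makefile terms I imagine something like this (just a sketch - recipe
lines need a leading tab):

   PROD_OBJS = main.o modh.o modl.o

   libproduct.a: $(PROD_OBJS)
           $(AR) rcs $@ $^

   # Objects listed before the library win: the linker only pulls a
   # member out of libproduct.a for symbols that are still undefined,
   # so the mock modl.o shadows the production one.
   test_modh.exe: test_modh.o tests/modl.o libproduct.a
           $(CC) $^ -o $@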


Re: Makefile or not?

> I need to know that I can always pull up the source code for a project,
> do a "build", and get a bit-perfect binary image that is exactly the
> same as last time.

It's impossible to overemphasize how important that is.  Somebody should
be able to check out the source tree and a few tools and then type a
single command to build production firmware.  And you need to be able
to _automate_ that process.

If building depends on an IDE, then there's always an intermediate
step where a person has to sit in front of a PC for a week tweaking
project settings to get the damn thing to build on _this_ computer
rather than on _that_ computer.

> This must work on different machines, preferably different OS's, and it
> must work over time.

And in my experience, IDEs do not.  The people I know who use Eclipse
with some custom set of plugins spend days and days when they need to
build on computer B instead of computer A.  I just scp "build.sh" to
the new machine and run it. It contains a handful of Subversion
checkout commands and a "make".  And I can do it remotely.  From my
phone if needed.


Yes!  Simply upgrading the OS often seems to render an IDE incapable
of building a project: another week of engineering time goes down the
drain tweaking the "project settings" to get things "just right".

--
Grant Edwards
Re: Makefile or not?

One approach is to put the tools into a VM or a container (eg Docker), so
that when you want to build you pull the container and you get an identical
build environment to the last time anyone built it.
Also, your continuous integration system can run builds and tests in
the same environment as you're developing on.
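
One way to wire that in is a makefile target that runs the normal build
inside the container - a sketch, where the image name is made up:

   IMAGE := example/fw-builder:1.0    # hypothetical pinned builder image

   .PHONY: docker-build
   docker-build:
           docker run --rm -v $(CURDIR):/work -w /work $(IMAGE) make bin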

Unfortunately vendors have a habit of shipping IDEs for Windows only, which
makes this harder.  It's not so much of a problem for the actual
compiler - especially if that's GCC under the hood - but ancillary tools (eg
configuration tools for peripherals, flash image builders, etc), which are
sometimes not designed to be scripted.

(AutoIt is my worst enemy here, but it has been the only way to get the job
done in some cases)

Decoupling your build from the vagaries of the IDE, even if you can trust
that you'll always build on a fixed platform, is still a good thing - many
IDEs still don't play nicely with version control, for example.

Theo

Re: Makefile or not?
On Monday, December 3, 2018 at 10:49:36 AM UTC-5, Theo Markettos wrote:

Second that!

We do development in and deliver VMs to customers now, so they are CERTAIN
to receive exactly the 'used for production build' versions of every tool,
library, and driver required for the JTAG gizmo, referenced components,
etc, etc, etc.  Especially important when some tools won't work under the
latest version of Winbloze!  Saves enormous headaches sometime down the
road when an update must be made...

Hope that helps,
Best Regards, Dave

Re: Makefile or not?
On 03/12/2018 16:49, Theo Markettos wrote:
> One approach is to put the tools into a VM or a container (eg Docker),
> so that when you want to build you pull the container and you get an
> identical build environment to the last time anyone built it.

That is possible, but often more than necessary.  Set up your build
sensibly, and it depends only on the one tree for the toolchain and
your source code tree.  It should not depend on the versions of utility
programs (make, sed, touch, etc.), environment variables, and that kind
of thing.

Sometimes, however, you can't avoid that - especially for Windows-based  
toolchains that store stuff in the registry and other odd places.

> Unfortunately vendors have a habit of shipping IDEs for Windows only,
> which makes this harder.

That is thankfully rare these days.  There are exceptions, but most  
major vendors know that is a poor habit.

> [...] but ancillary tools (eg configuration tools for peripherals, flash
> image builders, etc), which are sometimes not designed to be scripted.

Yes, these are more likely to be an issue.  Generally they are not  
needed for rebuilding the software - once you have run the wizards and  
similar tools, the job is done and the generated source can be  
preserved.  But it can be an issue if you need to re-use the tools for  
dealing with changes to the setup.

> many IDEs still don't play nicely with version control, for example.

Often IDE's have good integration with version control for the source  
files, but can be poor for the project settings and other IDE files.  
Typically that sort of thing is held in hideous XML files with  
thoughtless line breaks, making it very difficult to do comparisons and  
change management.



Re: Makefile or not?

> What do you really use for embedded projects? Do you use "standard"
> makefiles or do you rely on IDE functionality?

Gnu makefiles.

> Nowadays every MCU manufacturer provides an IDE, mostly for free,
> usually based on Eclipse [...]

And they're almost all timewasting piles of...

> I usually use the native IDE from the manufacturer of the target, and
> Code::Blocks (with MinGW) for compilation on the development machine.

If you're going to use an IDE, it seems like you should pick one and
stick with it so that you get _good_ at it.

I use Emacs, makefiles, and meld.

> So I have two IDEs for a single project.

How awful.



I've tried IDEs.  I've worked with others who use IDEs and watched
them work, and compared it to how I work.  It looks to me like IDEs
are a tremendous waste of time.

--
Grant Edwards
Re: Makefile or not?

> I use Emacs, makefiles, and meld.

+1 on those.  My memory isn't good enough any more to remember all the
byzantine steps through an IDE to re-complete all the tasks my projects
require.

Especially since each MCU seems to have a *different* IDE with
*different* procedures to forget...

And that's assuming they run on Linux in the first place ;-)

Re: Makefile or not?

The most important rule to remember is:

Never, ever, use any software written or provided by the silicon
vendor.  Every time I've failed to obey that rule, I've regretted it.

I've heard rumors that Intel at one time wrote a pretty good C
compiler for x86.

However, having used other development software from Intel, I find
that impossible to believe.  [Actually, Intel MDS-800 "blue boxes"
weren't bad as long as you ran CP/M on them instead of ISIS.]

And don't get me started on compilers and tools from TI, Motorola, or
various others either...

Some of them have put some effort into getting good Gnu GCC and
binutils support for their processors, and that seems to produce good
results.  If only they had realized that's all they really needed to
do in the _first_ place...

--
Grant Edwards
Re: Makefile or not?
On 4/12/18 6:36 am, Grant Edwards wrote:
> Never, ever, use any software written or provided by the silicon
> vendor.

[Difficult to apply that rule for an FPGA (except some Lattice parts).]

Also, ARM seems to require that its licensees support CMSIS.  This truly
excellent idea seems to be terribly poorly thought out and implemented.
You get header files that pollute your program's namespace with hundreds
or thousands of symbols and macros with unintelligible names, many of
which are manufacturer-specific and not even CMSIS-related.

I know there's opencm3 which seems to be better, but still...

Standard APIs like CMSIS need *very* disciplined design and rigorous  
management to minimise namespace pollution. Unfortunately we don't seem  
to be there, yet, unless I've missed something major.

How do people handle this?

Clifford Heath.

Re: Makefile or not?
> [Difficult to apply that rule for an FPGA (except some Lattice parts).]

True

> This truly excellent idea seems to be terribly poorly thought out and
> implemented.

You're putting that mildly.  I recently developed some firmware for
an NXP KL03 (Cortex-M0) part.  It's a tiny part with something like
8KB of flash and a couple hundred bytes of RAM.  Of course NXP provides
IDE-based "sample apps" that take up a gigabyte of disk space and
include CMSIS, which is itself hundreds (if not thousands) of files
defining APIs for all of the peripherals, comprising layer upon
layer of macros calling macros calling functions calling functions
full of other macros calling macros.  Trying to build even an empty
main() using the CMSIS libraries resulted in executable images several
times larger than the available flash.

I finally gave up and tossed out everything except a couple of the
lowest level include files that defined register addresses for the
peripherals I cared about. Then I wrote my own functions to access
peripherals and a Makefile to build the app.

In the end, I cursed myself for forgetting the rule of "no silicon
vendor software".  It would have been faster to start with nothing and
begin by typing register addresses from the user manual into a .h
file.

> Standard APIs like CMSIS need *very* disciplined design and rigorous
> management to minimise namespace pollution.

Yep, CMSIS is spectacularly, mind-numbingly awful.

> How do people handle this?

Lots of teeth-gritting and quiet swearing.

--
Grant Edwards
Re: Makefile or not?
Tuesday, 4 December 2018, 01:19:30 UTC+2, Grant Edwards wrote:

About CMSIS: it is wonderful if you use only the absolutely necessary
files.  I always extract from the gigabyte only the core_xxx.h files,
and the single header file with the register definitions for the
microcontroller.  For example:

core_cm0.h
core_cmInstr.h
core_cmFunc.h
stm32f091xc.h

That's simply the CMSIS for the STM32F091 chip in use.
In fact, the core_xxx files are the same for a given architecture (cm0,
cm3, etc.).  You only need the .h file for your chip's registers.
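
Then the makefile needs nothing more than an include path to the trimmed
copy - a sketch, assuming the four headers above are copied into a
cmsis/ directory inside the project:

   # cmsis/ holds only core_cm0.h, core_cmInstr.h, core_cmFunc.h and
   # stm32f091xc.h, kept under version control with the project.
   CFLAGS += -Icmsis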

Re: Makefile or not?
On 04/12/18 00:19, Grant Edwards wrote:
> I finally gave up and tossed out everything except a couple of the
> lowest level include files that defined register addresses for the
> peripherals I cared about.

There is a balance here - you can keep the good parts, and drop the bad
parts.  But sometimes it takes effort, and sometimes keeping a few bad
parts is more practical.

Manufacturer-provided headers for declaring peripherals are usually very
convenient and save a lot of work.  The same applies to the CMSIS
headers for Cortex internal peripherals, assembly function wrappers, etc.

On the other hand, the "wizard" and "SDK" generated code is often
appalling, with severe lasagne programming (a dozen layers of function
calls and abstractions for something that is just setting a peripheral
hardware register value).

I also find startup code and libraries can be terrible - they are often
written in assembly simply because they have /always/ been written in
assembly, and often bear the scars of having been translated from the
original 6805 assembly code (or whatever) through 68k, PPC, ARM, etc.,
probably by students on summer jobs.

I can relate to your "SDK uses more code than the chip".  I had occasion
to use a very small Freescale 8-bit device a good number of years ago.
The device had 2K or so of flash.  The development tools were over 1 GB
of disk space.  I thought I'd use the configuration tools to save time
reading the reference manual.  The "wizard" generated code for reading
the ADC turned out at 2.5 KB code space.  On reading the manual, it
turned out that all that was necessary for what I needed was to turn on
one single bit in a peripheral register.


Still, I would hate to have to write the peripheral definition files by
hand - there is a lot of value there, if you avoid the generated code.





Re: Makefile or not?


I definitely second the "students on summer jobs" opinion.  Over the
years I've seen a lot of sample/library code from silicon vendors and
most of it was truly awful.  It was often clearly written by somebody
who didn't have a working knowledge of either the hardware or the
language they were using.  Sometimes it just plain didn't work, but
since the authors obviously didn't understand what the hardware was
actually supposed to do, they had no way of knowing that.

In my experience, trying to use anything from silicon vendors beyond
the header files with register addresses/structures has always been a
complete waste of time.

--
Grant Edwards
Re: Makefile or not?
> In my experience, trying to use anything from silicon vendors beyond
> the header files with register addresses/structures has always been a
> complete waste of time.

That's been our experience too.  We reported a bug (with an included fix)
in a particular vendor's module, and their response was not to fix the
bug but to delete the module from their portfolio.  Then a few years
later the module reappeared - with the bug still present.

Theo

Re: Makefile or not?
On 12/3/18 2:30 PM, Clifford Heath wrote:

> How do people handle this?

Bourbon in general, though I have it on good authority that a nice rum
daiquiri is also quite effective.


--  
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order.  See above to fix.

Re: Makefile or not?
On 12/3/18 2:36 PM, Grant Edwards wrote:
> I've heard rumors that Intel at one time wrote a pretty good C
> compiler for x86.

I've used it, circa 2006-7, and for my application (highly multithreaded
3D electromagnetic simulation on an SMP) it was amazing--it blew the
doors off both Visual C++ and gcc under Cygwin.  (For sufficiently
permissive values of 'amazing', that is, i.e. 1.5-1.8x on the same
hardware.) ;)

> It looks to me like IDEs are a tremendous waste of time.

In defence of Eclipse, it does do a much better job of humanizing gdb  
than the other things I've used, such as ddd.

Cheers

Phil Hobbs

--  
Dr Philip C D Hobbs
Principal Consultant
