Makefile or IDE?

When I download C source code (for example for Linux), most of the time I need to use make (or autoconf).

In the embedded world (not embedded Linux), we use MCUs produced by silicon vendors that give you at least a ready-to-use IDE (Eclipse based, Visual Studio based, or proprietary). These days the vendor also gives you a full set of libraries, middleware and tools to create a complex project from scratch in a couple of minutes, compatible with and buildable from its IDE.

Ok, it's a good thing to be able to start with minimal effort and run some tests on an EVB or a new chip. However, I'm wondering whether good-quality commercial/industrial-grade software is maintained under the silicon vendor's IDE, or whether it is maintained with a Makefile (or similar).

I'm asking because I just started to add some unit tests (to run on the host machine) to one of my projects that is built under the IDE. Without a Makefile it is very difficult to add a series of tests: do I create a different IDE project for each module test?

Moreover, the build process of a project maintained under an IDE is manual (click on a button). Most of the time there is no way to build from the command line, and when it is possible, it isn't the "normal" way.

Many times in the past I tried to write a Makefile for my projects, but honestly the make tool is very cryptic to me (tabs instead of spaces?). Dependencies are a mess.

Do you use an IDE or a Makefile? Is there a more recent and much better alternative to make (such as CMake or SCons)?

Reply to
pozz

We always use makefiles. Some people do their editing and "make"ing in an IDE like eclipse. Others use emacs or whatever other environment they like.

In my experience, software provided by silicon vendors has always been utter crap. That's been true for IDEs, libraries, header files, debuggers -- _everything_. And it's been true for 40 years.

Recently I tried to use the silicon vendor's IDE and demo project/libraries to build a simple app that prints "hello world" on a serial port. This is an application, IDE, and set of libraries the silicon vendor provided _with_the_evaluation_board_.

Following the instructions, step by step, did allow me to build an executable. It was far too large for the MCU's flash. I threw out the silicon vendor's "drivers" (which were absurdly bloated) and C library (also huge). I wrote my own bare-metal drivers and substituted the printf() implementation I had been using for years. The executable size was reduced by over 75%.

We've also tried to use non-silicon-vendor IDEs (Eclipse), and using the IDE's concept of "projects" is always a complete mess. The "project" always ends up with lots of hard-coded paths and host-specific junk in it. This means you can't check the project into git/subversion, check it out on another machine, and build it without days of "fixing" the project to work on the new host.

--
Grant
Reply to
Grant Edwards

Thank you for sharing your experiences. Anyway, my post wasn't about the quality (size/speed efficiency...) of the source code provided by silicon vendors, but about the build process: IDE vs Makefile.

Reply to
pozz

On 02.12.2021 at 12:46, pozz wrote:

So far, all the IDEs I have encountered this century use some variation of make under the hood, and have a somewhat standard compiler (i.e. one that responds to `whatevercc -c file.c -o file.o`).

Think of make (or ninja) as some sort of (macro-) assembler language of build systems, and add a high-level language on top.

CMake seems to be a popular (the most popular?) choice for that language on top, although reasons why it sucks are abundant; the most prominent for me being that the Makefiles it generates violate pretty much every best practice and therefore are slow. Other than that, it can build embedded software of course.

You'll eventually need another meta-build system on top to build the projects that form your system image (busybox? openssl? dropbear? linux?); you're not going to port their build systems into yours.

Stefan

Reply to
Stefan Reuther

On 02/12/2021 17:34, Stefan Reuther wrote:

Today it's very difficult to choose which build system to study and use.

make, CMake/make, CMake/ninja, Meson, SCons, ...

What do you suggest for embedded projects? Of course I use a cross-compiler for the target (mainly arm-gcc), but also a host native compiler (mingw on Windows and gcc on Linux) for testing and simulation.
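
For the kind of setup I have in mind, I imagine something like this at the top of a Makefile to switch between the two compilers (just a rough sketch; the variable names are made up):

TARGET ?= arm

ifeq ($(TARGET),host)
CC = gcc
else
CC = arm-none-eabi-gcc
endif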

Reply to
pozz

They are not complete opposites. For example, the Eclipse CDT uses make as the tool to perform the build. There is a difference between the user writing the makefiles and the IDE creating them. Most IDEs create makefiles for running the generated code on the same computer that houses the IDE, and it is more difficult to cross-compile for embedded targets.

I agree on the silicon manufacturers' code: it should be jettisoned.

I have abandoned the code from both Atmel and ST after fighting for some weeks to make it perform. Instead, the manufacturers should concentrate on documenting the hardware properly. I had to disassemble Atmel's start-up code to discover that the SAM4 processor clock controls must be changed only one field at a time, even if the fields occupy the same register. If multiple fields were changed, the clock set-up never became ready. This is a serious problem, as ARM breaks the JTAG standard and requires the processor clock to be running in order to respond to JTAG. The JTAG standard assumes that the only clocking needed comes from the JTAG clock.

--

-TV
Reply to
Tauno Voipio

On 03.12.2021 at 11:49, pozz wrote:

Same here.

At work, we use cmake/make for building (but if you have cmake, it doesn't matter whether there's make or ninja below it). That's pretty ok'ish for turning a bunch of source code files into an executable; probably not so good for doing something else (e.g. rendering images for documentation and your device's UI).

Personally, I generate my Makefiles (or build.ninja files) with a homegrown script; again, based on the assumption that make is an assembler that needs a high-level language on top.

However, building your code isn't the whole story. Unless you have a huge monorepo containing everything you ever did, you'll have to check out different things, and you will have dependencies between projects, some even conditional (maybe you don't want to build your unit test infrastructure when you make a release build for your target? maybe you want a different kernel version when building an image for a v2 board vs. a v1 board?).

I use a tool called 'bob' as the meta-build system for that, at work and personally. It started out as an in-house tool so it surely isn't industry standard, needs some planning, and then gets the job done nicely. It invokes the original build process of the original subprojects, be it cmake-based or autotools-based. The people who build (desktop or embedded) Linux distributions all have some meta-build system to do things like that, and I would assume neither of them is easy to set up, just because the problem domain is pretty complex.

Stefan

Reply to
Stefan Reuther

ISTM that IDEs start off as cover for the woeful state of the command line environment on Windows.

On Unix, when you want to target a different platform, all you need is a new compiler. Just grab arm-unknown-gnueabi-gcc and you're done. Maybe you need some libraries as well, but that's easy. Debugging tools are all there - based on gdb or one of numerous frontends. Then you use the environment you already have - your own editor, shell, scripting language, version control, etc. are all there.

On Windows[*], few people develop like that because cmd.exe is an awful shell to work in, all this C:\Program Files\blah tends to get in the way of Unixy build tools like make, and command line editors etc aren't very good. Windows also makes it awkward to mix and match GUI tools (eg separate editor, compiler, debugger GUI apps).

So instead people expect an IDE with its own editor, that does everything in house and lives in a single maximised window, and orchestrates the build pipeline.

But then it starts bloating - the debugger gets brought in, then the device programmer, then sometimes it starts growing its own idea of a version control client. And eventually you end up with something extremely complicated and somewhat flaky just to build a few kilobytes of code.

Not to say that there aren't some useful features of IDEs - one thing is explicit library integration into the editor (so you get documentation and expansion as you type), another is special dialogues for configuration options in your particular chip (eg pin mapping or initial clock setup) rather than expecting you to configure all these things from code. The first is something that existing editors can do given sufficient information about the API, and the second is generally something you only do once per project.

But for the basic edit-build-run-test cycle, the GUI seems mostly to get in the way.

Theo

[*] Powershell and WSL have been trying to improve this. But I've not seen any build flows that make much use of them, beyond simply taking Linux flows and running them in WSL.
Reply to
Theo

I always had good luck using Cygwin and gnu "make" on Windows to run various Win32 .exe command line compilers (e.g. IAR). I (thankfully) haven't needed to do that for several years now...

--
Grant
Reply to
Grant Edwards

On 02/12/2021 12:46, pozz wrote:

It's absurd how difficult it is to create a Makefile for a simple project with the following tree:

Makefile
src/
    file1.c
    module1/
        file2.c
    module2/
        file3.c
target1/
    Release/
        src/
            file1.o
            file1.d
            module1/
                file2.o
                file2.d
            module2/
                file3.o
                file3.d
    Debug/
        src/
            file1.o
            file1.d
            ...

Just creating the directories for the output files (objects and dependencies) is a mess: .PRECIOUS rules, cheating make by adding a dot after the trailing slash, secondary expansion, order-only prerequisites!!!

Dependencies must be created as a side effect of compilation with esoteric -M options for gcc.
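
For reference, this is the kind of thing I mean (a minimal sketch only; the file names are placeholders and the recipe line must start with a tab):

OBJS = src/file1.o src/module1/file2.o
DEPS = $(OBJS:.o=.d)

# -MMD writes a .d file next to the object as a side effect of compiling;
# -MP adds phony targets for headers so deleted headers don't break the build.
%.o: %.c
	$(CC) $(CFLAGS) -MMD -MP -c $< -o $@

# include the generated dependency files, ignoring the ones not created yet
-include $(DEPS)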

Is CMake simpler to configure?

Reply to
pozz

On 03.12.2021 at 23:48, pozz wrote:

[...]

Hypothesis: a single Makefile that does all this is not a good idea. Better: make a single Makefile that turns your source code into one instance of object code, and give it some configuration options that say whether you want target1/Release, target2/Release, or host/Debug.

I'm not sure what you need order-only dependencies for. For a project like this, with GNU make I'd most likely just do something like

OBJ = file1.o module1/file2.o module2/file3.o

main: $(OBJ)
	$(CC) -o $@ $(OBJ)

$(OBJ): %.o: $(SRCDIR)/%.c
	mkdir $(dir $@)
	$(CC) $(CFLAGS) -c $< -o $@

It's not too bad with sufficiently current versions.

CFLAGS += -MMD -MP
-include $(OBJ:.o=.d)

CMake does one-configuration-per-invocation type builds like sketched above, i.e. to build target1/Release and target1/Debug, you invoke CMake on two different workspaces, once with -DCMAKE_BUILD_TYPE=Release and once with -DCMAKE_BUILD_TYPE=Debug.
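
With a reasonably recent CMake (3.13 or later) that looks roughly like this from the command line; the build directory names and the toolchain file name are made up here, and for a cross build you point CMake at a toolchain file describing your cross compiler:

cmake -S . -B build-target1-release -DCMAKE_TOOLCHAIN_FILE=arm-gcc.cmake -DCMAKE_BUILD_TYPE=Release
cmake -S . -B build-target1-debug   -DCMAKE_TOOLCHAIN_FILE=arm-gcc.cmake -DCMAKE_BUILD_TYPE=Debug
cmake --build build-target1-release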

Stefan

Reply to
Stefan Reuther

On 04/12/2021 10:31, Stefan Reuther wrote:

Oh yes, I'm using three make variables:

CONF=rls|dbg
TARGET=target1|target2

I also have

MODEL=model1|model2

because the same source code can be compiled to produce firmware for two types of products.

Anyway, even using these three variables, the Makefile is difficult to write and understand (at least for me).

This is suboptimal. Every time an object file is created (because it is not present or because it is out of date with respect to its prerequisites), the mkdir command is executed, even if $(dir $@) already exists.

A better approach is to use a dedicated rule for directories, but it's very complex and tricky[1].

I think your approach is better only because it is much more understandable, not because it is more efficient.

Are you sure you don't need -MT too, to specify exactly the target rule?

Yes, I was asking whether CMake's configuration file is simpler to write than a Makefile.

[1]
formatting link
Reply to
pozz

I almost never use makefiles that have the object files (or source files, or other files) specified explicitly.

CFILES := $(foreach dir,$(ALLSOURCEDIRS),$(wildcard $(dir)/*.c))
CXXFILES := $(foreach dir,$(ALLSOURCEDIRS),$(wildcard $(dir)/*.cpp))

OBJSsrc := $(CFILES:.c=.o) $(CXXFILES:.cpp=.o)
OBJS := $(addprefix $(OBJDIR), $(patsubst ../%,%,$(OBJSsrc)))

If there is a C or C++ file in the source tree, it is part of the project. Combined with automatic dependency resolution (for which I use gcc with -M* flags) this means that the make for a project adapts automatically whenever you add new source or header files, or change the ones that are there.
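
Pulling the generated dependency files in is then just a couple of extra lines (a sketch, reusing the OBJS variable from above):

DEPS := $(OBJS:.o=.d)
-include $(DEPS)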

Use existence-only dependencies:

target/%.o : %.c | target
	$(CC) $(CFLAGS) -c $< -o $@

target :
	mkdir -p target

When you have a dependency given after a |, gnu make will ensure that it exists but does not care about its timestamp. So here it will check if the target directory is there before creating target/%.o, and if not it will make it. It probably doesn't matter much for directories, but it can be useful in some cases to avoid extra work.

And use "mkdir -p" to make a directory including any other parts of the path needed, and to avoid an error if the directory already exists.

The reference you gave is okay too. Some aspects of advanced makefiles /are/ complex and tricky, and can be hard to debug (look out for mixes of spaces instead of tabs at the start of lines!) But once you've got them in place, you can re-use them in other projects. And you can copy examples like the reference you gave, rather than figuring it out yourself.

My version is - IMHO - understandable /and/ efficient.

The exact choice of -M flags depends on details of your setup. I prefer to have the dependency creation done as a separate step from the compilation - it's not strictly necessary, but I have found it neater. However, I use two -MT flags per dependency file. One makes a rule for the file.o dependency, the other is for the file.d dependency. That way, make knows when it has to re-build the dependency file.
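
Roughly, the kind of rule I mean looks like this (a sketch only - the directory layout and variable names are placeholders, not my actual makefiles):

# Regenerate the .d file whenever the source changes. The two -MT flags
# make the generated rule name both the object file and the .d file itself
# as targets, so make knows to re-build the dependency file as well.
$(OBJDIR)/%.d : %.c
	$(CC) $(CFLAGS) -MM -MT '$(OBJDIR)/$*.o' -MT '$@' $< > $@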

I've only briefly looked at CMake. It always looked a bit limited to me - sometimes I have a variety of extra programs or steps to run (like a Python script to pre-process files and generate extra C or header files, or extra post-processing steps). I also often need different compiler flags for different parts of a project. Perhaps it would work for what I need and I just haven't read enough.

Reply to
David Brown

CMake is on a different level than make. CMake aims at the realm of autoconf, automake and friends. One of the supported back-ends for CMake is GNU make.

--

-TV
Reply to
Tauno Voipio

CMake is popular because it is cross-platform: with a bit of care, its build files can (conditionally) run on any platform that CMake supports.

Cross-platform can be a dealbreaker in the desktop/server world.

YMMV, George

Reply to
George Neuner

The problem with Cygwin is that it doesn't play well with native Windows GCC (MinGW et al.).

Cygwin compilers produce executables that depend on the /enormous/ Cygwin library. You can statically link the library or ship the DLL (or an installer that downloads it) with your program, but by doing so your program falls under the GPL - the terms of which are not acceptable to some developers.

And the Cygwin environment is ... less than stable. Any update to Windows can break it.

YMMV, George

Reply to
George Neuner

I concur with that. Cygwin made sense long ago, but for the past couple of decades the mingw-based alternatives have been more appropriate for most uses of *nix stuff on Windows. In particular, Cygwin is a thick compatibility layer that has its own filesystem, process management, and other features to fill in the gaps where Windows doesn't fulfil the POSIX standards (or does so in a way that plays badly with the rest of Windows). Very often, the changes needed in open-source or *nix-heritage software to make them more Windows-friendly are small. A common example is changing old-style "fork + exec" paths to "spawn" calls, which are more modern and more efficient (even on *nix). With such small changes, programs can be compiled on thin compatibility layers like mingw instead, with the result being a lot faster, smoother, better integrated with Windows, and without that special tier of DLL-hell reserved for cygwin1.dll and its friends.

So the earliest gcc versions I built and used on Windows, gcc 2.95 for the m68k IIRC, had to be built with Cygwin. By the time I was using gcc 4+, perhaps earlier, it was all mingw-based and I have rarely looked at Cygwin since.

I strongly recommend msys2, with mingw-64, as the way to handle *nix programs on Windows. You can install and use as much as you want - it's fine to take most simple programs and use them independently on other Windows systems with no or a minimum of dll's. (If the program relies on many external files, it will need the msys2 file tree.) You can use msys2 bash if you like, or not if you don't like. The mingw-64 gcc has a modern, complete and efficient C library instead of the horrendous MSVCRT dll. Most Windows ports of *nix software are made with either the older mingw or the newer mingw-64.

I can understand using Cygwin simply because you've always used Cygwin, or if you really need fuller POSIX compatibility. But these days, WSL is probably a better option if that's what you need.

Reply to
David Brown

[...]

The problem is that both projects, Cygwin and MinGW/MSYS, provide much more than just a compiler, and in an incompatible way, which is probably incompatible with what your toolchain does, and incompatible with what Visual Studio does.

"-Ic:\test" specifies one path name for Windows, but probably two for a toolchain with Unix heritage, where ":" is the separator, not a drive letter. Cygwin wants "-I/cygdrive/c" instead, (some versions of) MinGW want "-I/c". That, on the other hand, might be an option "-I" followed by an option "/c" for a toolchain with Windows heritage.

The problem domain is complex, therefore solutions need to be complex.

That aside, I found staying within one universe ("use all from Cygwin", "use all from MinGW") to work pretty well; when having to call into another universe (e.g. native Win32), be careful to not use, for example, any absolute paths.

Stefan

Reply to
Stefan Reuther

On 04.12.2021 at 16:23, pozz wrote:

(did I really forget the '-p'?)

The idea was that creating a directory and checking for its existence both require a path lookup, which is the expensive operation here.

When generating the Makefile with a script, it's easy to sneak a 100% matching directory creation dependency into any rule that needs it

foo/bar.o: bar.c foo/.mark
	...
foo/.mark:
	mkdir foo

Documentation says you are right, but '-MMD -MP' works fine for me so far...

Stefan

Reply to
Stefan Reuther

Neither Cygwin nor msys are compilers or toolchains. Nor is MSVS, for that matter. That would only be a problem if you misunderstood what they are.

Drive letters on Windows have always been a PITA. Usually, IME, it is not a big issue for compilation - most of your include directories will be on the same drive you are working in (with "system" includes already handled by the compiler configuration). Use a makefile, make the base part a variable, then at most you only have to change one part. It's a good idea anyway to have things like base paths to includes as a variable in the makefile.
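
Something along these lines, for example (the path is obviously made up):

BASEDIR ?= c:/work/some_project
CFLAGS += -I$(BASEDIR)/include -I$(BASEDIR)/drivers/include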

One of the differences between msys and cygwin is the way they handle paths - msys has a method that is simpler and closer to Windows, while cygwin is a bit more "alien" but supports more POSIX features (like links). In practice, with programs compiled for the "mingw" targets you can usually use Windows names and paths without further ado. On my Windows systems, I put msys2's "/usr/bin" directory on my normal PATH, and from the standard Windows command prompt I happily use make, grep, less, cp, ssh, and other *nix tools without problems or special consideration.

Reply to
David Brown
