Makefile or IDE?

Checking for the path is not expensive - it is already necessary to have the path details read from the filesystem (and therefore cached, even on Windows) because you want to put a file in it. So it is free. "mkdir -p" is also very cheap - it only needs to do something if the path does not exist. (Of course, on Windows starting any process takes time and resources an order of magnitude or more greater than on *nix.) It is always nice to avoid unnecessary effort, as even small inefficiencies add up if there are enough of them. But there's no need to worry unduly about the small things.

And it's easy to forget the "touch foo/.mark" command to make that work!

But that is completely unnecessary - make is perfectly capable of working with a directory as a dependency and target (especially as an order-only dependency).
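
A minimal sketch of that pattern (the names build/ and main.o are just placeholders):

build/main.o: main.c | build
	$(CC) $(CFLAGS) -c $< -o $@

build:
	mkdir -p $@

The "|" makes the directory an order-only prerequisite, so the rule does not re-run just because the directory's timestamp changed, and no marker file is needed.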

Reply to
David Brown

I use CMake for cross-compilation for microcontroller stuff. I don't use it for my own code, but there are a few 3rd-party libraries that use it, and I don't have any problems configuring it to use a cross compiler.

--
Grant
Reply to
Grant Edwards

It's always worked fine for me.

I wasn't talking about using Cygwin compilers. I was talking about using Cygwin to do cross-compilation using compilers like IAR.

That's definitely true. :/

Reply to
Grant Edwards

OK. As I said, I haven't looked in detail or tried much. Maybe I will, one day when I have time.

Reply to
David Brown

I see no reason at all to use cmake for embedded code unless you want to use a large 3rd-party library that already uses it, and you want to use that library's existing cmake build process. For smaller libraries, it's probably easier to write a makefile from scratch.

IMO, configuring stuff that uses cmake seems very obtuse and fragile - but that's probably because I don't use it much.

--
Grant
Reply to
Grant Edwards

For my most recent projects, I'm using Eclipse and letting it generate the make files. I find it necessary to manually clean up the XML controlling Eclipse to ensure that there are no hard-coded paths and everything uses sane path variables (after starting with vendor-tool-generated project).

I have multiple projects in the workspace:

1) a target project that uses the GCC ARM cross-compiler (debug and release targets), and
2) one or more MinGW GCC projects for host builds of debug+test software.

It's not optimal, but it does work without too much grief. It does a poor job, though, when it comes to understanding and maintaining the places where different compiler options are needed (fortunately there aren't many).

I read *the* book on CMake and my head hurts, plus the book is revised every three weeks as CMake adds or fixes numerous 'special' things. Haven't actually used it yet but might try (with Zephyr).

My older big projects are primarily make (with dozens of targets including intermediate preprocess stuff), plus a separate Visual Studio build for a simulator, an Eclipse build (auto-generated make) for one of the embedded components, and a Linux Eclipse build (auto-generated make) for Linux versions of utilities. All this is painful to maintain and keep synchronized. I sure would like to see a better way to handle all the different targets and platforms (which CMake should help with, but I'm really not sure how to wrangle the thing).

Interesting discussion!

Reply to
Dave Nadler

On 02/12/2021 12:46, pozz wrote: [...]

I'm replying to this post with a few questions about make.

For embedded projects, we use at least one cross-compiler. I usually use two compilers: a cross-compiler for the embedded target and a native compiler for creating a "simulator" for the host, or for running some tests on the host.

I'm thinking of using an env variable to choose between the targets:

make TARGET=embedded|host

While the native compiler is usually already on the PATH (even if I prefer to avoid that), the cross-compiler usually is not. How do you solve this?

I'm thinking of using env variables again, setting them in a batch script path.bat that is machine dependent (so it shouldn't be tracked by git):

SET GNU_ARM=c:\nxp\MCUXpressoIDE_11.2.1_4149...
SET MINGW_PATH=c:\mingw64

In the makefile:

ifeq ($(TARGET),embedded)
  CC := "$(GNU_ARM)/bin/arm-none-eabi-gcc"
  CPP...
else ifeq ($(TARGET),host)
  CC := $(MINGW_PATH)/bin/gcc
  CPP...
endif

In this way, I launch path.bat only once on my Windows development machine and run make TARGET=embedded or TARGET=host during development.

Another issue is with the internal commands of cmd.exe. GNU make for Windows, ARM gcc, MinGW and so on are able to handle paths with Unix-like slashes, but Windows internal commands such as mkdir and del do not. I think it's much better to use the Unix-like commands (mkdir, rm) that can be installed with coreutils[1] for Windows.

So in path.bat I add the coreutils folder to PATH:

SET COREUTILS_PATH=C:\TOOLS\COREUTILS

and in Makefile:

MKDIR := $(COREUTILS_PATH)/bin/mkdir
RM := $(COREUTILS_PATH)/bin/rm

Do you use better solutions?

[1]
Reply to
pozz

On 04/12/2021 17:41, David Brown wrote: [...]

Do you replicate the source tree in the target directory for build?

I'd prefer to have the same tree in source and build dirs:

src/
  file1.c
  mod1/
    file1.c
  mod2/
    file1.c
build/
  file1.o
  mod1/
    file1.o
  mod2/
    file1.o

With your rules above, I don't think this can be done. target is only the main build directory, but I need to create subdirectories too.

I understand that for this I need to use $(@D) in the prerequisites, and that can only be done with secondary expansion.

.SECONDEXPANSION:

target/%.o : %.c target/%.d | $$(@D)
	$(CC) $(CFLAGS) -c $< -o $@

$(BUILD_DIRS):
	$(MKDIR) -p $@
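
Just as a sketch of the missing piece, $(BUILD_DIRS) could be derived from the source list along these lines (the SRCS list here is only illustrative):

# Illustrative only: list the sources however suits the project
SRCS := main.c mod1/file1.c mod2/file1.c
OBJS := $(addprefix target/,$(SRCS:.c=.o))
# $(dir) leaves a trailing slash; strip it so the names match what $(@D) produces
BUILD_DIRS := $(patsubst %/,%,$(sort $(dir $(OBJS))))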

Reply to
pozz

Whatever will run on your box. (Usually that's make/automake and nothing else.)

Reply to
Johann Klammer

On 09.12.2021 11:54, pozz wrote:

What on earth for?

Subdirectories for sources are necessary to organize our work, because humans can't deal too well with folders filled with hundreds of files, and because we fare better with the project's top-down structure tangibly represented as a tree of subfolders.

But let's face it: we very rarely even look at object files, much less work on them in any meaningful fashion. They just have to be somewhere, but it's no particular burden at all if they're all in a single folder, per primary build target. They're for the compiler and make alone to work on, not for humans. So they don't have to be organized for human consumption.

That's why virtually all hand-written Makefiles I've ever seen, and a large portion of the auto-generated ones, too, keep all of a target's object, list and dependency files in a single folder. Mechanisms like VPATH exist for the express purpose of easing this approach, and the built-in rules and macros also largely rely on it.
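
A rough illustration of that single-folder approach (the directory names here are made up):

SRCDIRS := src src/mod1 src/mod2
VPATH   := $(SRCDIRS)
SRCS    := $(notdir $(wildcard $(addsuffix /*.c,$(SRCDIRS))))
OBJS    := $(addprefix objdir/,$(SRCS:.c=.o))

# VPATH lets make find the matching %.c in any of the source dirs
objdir/%.o: %.c | objdir
	$(CC) $(CFLAGS) -c $< -o $@

objdir:
	mkdir -p $@

(With the obvious caveat that this relies on source basenames being unique.)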

The major exception in this regard is CMake, which does indeed mirror the source tree layout --- but that's manageable for them only because their Makefiles, being fully machine-generated, can become almost arbitrarily complex, for no extra cost. Nobody in full possession of their mental capabilities would ever write Makefiles the way CMake does it, by hand.

Reply to
Hans-Bernhard Bröker

There are other automatic systems that mirror the structure of the source tree for object files, dependency files and list files (yes, some people still like these). Eclipse does it, for example, and therefore so do the majority of vendor-supplied toolkits, since most are Eclipse-based. (I don't know if NetBeans and Visual Studio / Visual Studio Code do so - these are the other two IDEs commonly used by manufacturer tools.)

The big advantage of having object directories that copy source directories is that it all works even if you have more than one file with the same name. Usually, of course, you want to avoid name conflicts - there are risks of other issues or complications such as header guard symbols that are not unique (they /can/ include directory information and not just the filename, but they don't always do so) and you have to be careful that you #include the files you meant. But with big projects containing SDK files, third-party libraries, RTOS's, network stacks, and perhaps files written by many people working directly on the project, conflicts happen. "timers.c" and "utils.c" sound great to start with, but there is a real possibility of more than one turning up in a project.

It is not at all hard to make object files mirror the source tree, and it adds nothing to the build time. For large projects, it is clearly worth the effort. (For small projects, it is probably not necessary.)

Reply to
David Brown

But sometimes, we do look at them. Especially in an embedded context. One example could be things like stack consumption analysis. Or to answer the question "how much code size do I pay for using this C++ feature?". "Did the compiler correctly inline this function I expected it to inline?".

And if the linker gives me a "duplicate definition" error, I prefer that it is located in 'editor.o', not '3d3901cdeade62df1565f9616e607f89.o'.

The main reason I'd never write Makefiles the way CMake does it is that CMake's makefiles are horribly inefficient...

But otherwise, once you've got the infrastructure to place object files in SOME subdirectory in your build system, mirroring the source structure is easy and gives a usability win.

Stefan

Reply to
Stefan Reuther

On 10/12/2021 19:44, David Brown wrote:

Atmel Studio, now Microchip Studio, which is based on Visual Studio, mirrors the source tree exactly in the build dir.

Yes, these are the reasons why I'd like to put object files in subdirectories.

Ok, it's not too hard (nothing is hard when you know how to do it), but it's not that simple either.

Reply to
pozz

Of course.

And once you've got a makefile you like for one project, you copy it for the next. I don't think I have started writing a new makefile in 25 years!

Reply to
David Brown

Too true.

You don't write a Makefile from scratch any more than you sit down with some carbon, water, nitrogen, phosphorus and whatnot and make an apple tree.

You look around and find an nice existing one that's closest to what you want, copy it, and start tweaking.

--
Grant
Reply to
Grant Edwards

On 11.12.2021 10:01, Stefan Reuther wrote:

In my experience, looking at individual object files does not occur in embedded context any more often than in others.

That one's actually easier if you have the object files all in a single folder, as the tool will have to look at all of them anyway, so it helps if you can just pass it objdir/*.o.

Or to

Both of those are way easier to check in the debugger or in the map file than by inspecting individual object files.

Both are equally useless. You want to know which source file they're in, not which object files.

Do you actually use a tool that obfuscates the .o file names like that?

I don't think you've actually mentioned a single one, so far. None of the things you mentioned had anything to do with _where_ the object files are.

Reply to
Hans-Bernhard Bröker

On 10.12.2021 19:44, David Brown wrote:

Setting aside the issue of whether the build can actually handle that ("module names" in the code tend to be based only on the basename of the source, not its full path, so they would clash anyway), that should remain an exceptional mishap. I don't subscribe to the idea of making my everyday life harder to account for (usually) avoidable exceptions like that.

Reply to
Hans-Bernhard Bröker

Not in my experience either - but that's because I look at individual object files even for desktop/server applications, and I don't expect that to be the rule :)

For me, 'objdump -dr blah.o | less' or 'nm blah.o | awk ...' is the easiest way to answer such questions. The output of 'objdump | less' is much easier to handle than gdb's 'disas'. And how do you even get function sizes with a debugger?

I want to know what translation unit they are in. It doesn't help to know that the duplicate definition comes from 'keys.inc', which is supposed to be included exactly once. I want to know which two translation units included it, and for that it helps to have the name of the translation unit - the initial *.c/cpp file - encoded in the object file name.

Encoding the command-line that generates a file (as a cryptographic hash) into the file name is a super-easy way to implement rebuild-on-rule-change. I use that for a number of temporary files.

I do not use that for actual object files for the reasons given, but it would technically make sense.
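
A rough sketch of the idea in make terms (the tool and file names here are invented, not my real ones):

# The flags are hashed into the output name, so changing them changes
# the target name and therefore forces a regeneration.
GEN_FLAGS := --optimize --format=bin
CMD_HASH  := $(firstword $(shell printf '%s' '$(GEN_FLAGS)' | md5sum))
OUT       := tmp/table.$(CMD_HASH).bin

$(OUT): table.src
	@mkdir -p $(@D)
	gen-tool $(GEN_FLAGS) -o $@ $<

The obvious cost is that stale copies with old hashes accumulate until the next clean.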

There are no hard technical reasons. It's all about usability, and that's about the things you actually do. If you've got a GUI that takes you to the assembler code of a function with a right-click in the editor, you don't need 'objdump'. I don't have such a GUI and don't want it most of the time.

Stefan

Reply to
Stefan Reuther

Nor do I. But as I said, and as others know, supporting object files in a tree is not difficult in a makefile, and it is common practice for many build systems. I can't think of any that /don't/ support it (not that I claim to have used a sizeable proportion of build systems).

If it is easy to avoid a particular class of problem, and have a nice, neat structure, then what's the problem with having object files in a tree?

After all, the basic principle of an automatically maintained makefile (or other build system) is:

  1. Find all the source files - src/x/y/z.c - in whatever source paths you have specified.
  2. Determine all the object files you need by swapping ".c" for ".o", and changing the "src" directory for the "build" directory, giving you a list build/x/y/z.o.
  3. Figure out a set of dependency rules for these, either using something like "gcc -M...", or the lazy method of making all object files depend on all headers, or something in between.
  4. Make your binary file depend on all the build/x/y/z.o files.

As I see it, it is simpler, clearer and more natural for the object files (and dependency files, list files, etc.) to follow the structure of the source files. I'd have to go out of my way to make a riskier system that put all the object files in one place.
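
In GNU make terms, those four steps can come out about as compactly as this (a sketch only - src/, build/ and the program name are placeholders, and it assumes a POSIX shell for "find"):

# 1. Find the sources.
SRCS := $(shell find src -name '*.c')
# 2. Map src/x/y/z.c -> build/x/y/z.o.
OBJS := $(patsubst src/%.c,build/%.o,$(SRCS))
# 3. Let the compiler write a dependency file next to each object.
DEPS := $(OBJS:.o=.d)
# 4. The binary depends on all the objects.
prog: $(OBJS)
	$(CC) $(LDFLAGS) -o $@ $^

build/%.o: src/%.c
	@mkdir -p $(@D)
	$(CC) $(CFLAGS) -MMD -MP -c $< -o $@

-include $(DEPS)

(The directory is created in the recipe here for brevity; the order-only prerequisite trick discussed earlier works just as well.)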

Reply to
David Brown

Used Cygwin for years just to have access to the unix utils and X so I could run my favourite editor, nedit. Never ran compilers through it, but it was a hassle-free experience once set up. That was the 32-bit version, sadly no longer available...

Chris

Reply to
chris
