picoTCP (an open-source embedded TCP/IP stack) has always been developed with a focus on Linux and GCC, both for working on picoTCP itself and for building projects with it. The final target is usually an embedded microcontroller (typically ARM-based). For that reason we put a fair amount of logic into the Makefile to keep modularity high, and we use compile flags to enable or disable each of these modules.
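Concretely, the gating looks something like the sketch below; the flag and function names are illustrative rather than our exact ones, and the Makefile supplies the flag with -D:

    /* Hypothetical sketch of compile-flag module gating. The Makefile
     * decides which modules to build and passes e.g. -DPICO_SUPPORT_TCP;
     * code guarded by the flag simply disappears when it isn't set. */
    #ifdef PICO_SUPPORT_TCP

    int pico_tcp_example(void)   /* stands in for the whole TCP module */
    {
        return 0;
    }

    #endif /* PICO_SUPPORT_TCP */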

We've started to notice that Linux isn't the only development system in the world (here I mean the system used while building your own project with picoTCP), and that Windows, together with a range of IDEs that each ship their own proprietary compiler, isn't a good friend of Makefiles.

This usually means that these users have to add the picoTCP files to their project by hand and extend their own build system. It also means that people run into issues like:

which files to include (because that logic lives in the Makefile)

the order in which those files should be included

having to update all of those files by hand whenever there's a new version

We're currently looking at different ways to distribute picoTCP more conveniently, and would like to know the pros and cons of each.

So far we've identified a couple of options:

Generate a .a file

(+) Clean solution that keeps using the existing Makefile

(-) Needs a Linux and GCC environment

(-) Only works for the same target compiler/system

Generate a single .c and .h file

This is something the Mongoose library from Cesanta does; a usage sketch follows this list.

(+) Very portable & simple to use

(-) A nightmare to debug

(-) Will need external scripts to remove #includes and merge files together

(-) All modules would be included, therefore we'll probably have to add some more compilation flags
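To make this option concrete, here's roughly how a Mongoose-style single-file library gets consumed; the file and flag names below are made up for illustration, not our actual ones:

    /* main.c -- hypothetical use of a single-file distribution.
     * The user adds the amalgamated picotcp.c to their build, selects
     * modules, and includes one header. */
    #define PICO_SUPPORT_TCP  1   /* pick modules before the include */
    #define PICO_SUPPORT_IPV6 0
    #include "picotcp.h"

    int main(void)
    {
        /* application code calls into the stack as usual */
        return 0;
    }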

Not being an embedded and/or C developer, I am a little confused about what you are asking. It is unclear to me: when you say "development", do you mean development of a product using picoTCP, or do you mean development of picoTCP itself? And when you say "focus on GCC", are you similarly referring to using GNU make/GCC to generate the distribution files from the development files, or are you talking about building the final product that uses picoTCP? In your README, you mention a long list of supported compilers, so I guess it is the former? Maybe those points are obvious to an embedded …
– Jörg W Mittag, Dec 30 '16 at 21:33


… developer, but I'm guessing you might get some interesting inputs from other developers as well, so maybe you could clarify that for those not deeply into the matter? Thanks!
– Jörg W Mittag, Dec 30 '16 at 21:34


I've clarified this somewhat in the question. Our main concern is with the end users who use picoTCP in their projects.
– Phalox, Dec 31 '16 at 8:50


Can't you use CMake? That would let you generate Makefiles or Visual Studio project files for all kinds of different targets.
– Doc Brown, Dec 31 '16 at 9:09


@Phalox: CMake itself is cross-platform, it is an established standard on a huge number of platforms, and it can generate project files or Makefiles for almost any sensible build system the users of your lib might be using. Nevertheless, you are right: it probably makes things more complicated than distributing a single .h/.c file without any platform-specific project or make files.
– Doc Brown, Jan 1 '17 at 12:00

1 Answer

Neither of these is a good option. The former is, for the reason you covered, platform-specific. The latter leaves anyone wanting to alter the set of modules compiled into their binary dependent on you continuing to run a server that can generate a new version. You could provide the sources for that server, but having to set one up just for that would be inconvenient or impossible in some shops.

Generate a single .c and .h file

I think this is the best way to go since you can generate it as part of your distribution and it's the same every time.

(+) Very portable & simple to use

The portability issue is a big thing in the embedded space. There are lots of odd little development environments for various platforms that don't provide a lot of the kinds of features you find in a Unixy environment, so anything you can do to make it easier to incorporate will increase adoption.

An added bonus is that many compilers can do size and speed optimizations on a single file that aren't possible when the code is split. Being able to squeeze your code into a smaller footprint and wring more out of it is welcome in constrained environments.
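To illustrate with a sketch (the PICO_AMALGAMATION/PICO_PRIVATE macros are hypothetical, modeled on SQLite's SQLITE_PRIVATE): once everything lives in one translation unit, functions that had to be extern across module files can be given internal linkage, which lets the compiler inline them and throw away whatever a given configuration never calls.

    /* Hypothetical: functions that were extern across module files can
     * become static in the amalgamation, enabling inlining and
     * dead-code elimination. SQLite does this via SQLITE_PRIVATE. */
    #ifdef PICO_AMALGAMATION
    #define PICO_PRIVATE static
    #else
    #define PICO_PRIVATE
    #endif

    PICO_PRIVATE unsigned short checksum_fold(unsigned long sum)
    {
        while (sum >> 16)                        /* fold carries back in */
            sum = (sum & 0xFFFFu) + (sum >> 16);
        return (unsigned short)sum;
    }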

(-) A nightmare to debug

Not as much as you'd think. Most debuggers don't care whether you have one giant file or 50 little ones. Once you've worked the kinks out of determining what gets included and what doesn't, you probably won't notice it, because most debugging is done intra-function. Odds are quite good that most of the bugs you encounter will be due to flaws in the code and will show up in both the un-merged and merged versions. Speaking of which: if you have a test battery, it would be good to run it against both.
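One mitigation worth mentioning, as an assumption about how your merge script could work rather than something you've described: have it emit standard C #line directives at each file boundary. Diagnostics and debuggers then keep pointing at the original files. (SQLite's amalgamation marks file boundaries similarly, with banner comments.)

    /* Excerpt of a hypothetical merged picotcp.c. Each #line directive
     * tells the compiler to report subsequent lines against the original
     * file, so breakpoints and error messages stay meaningful. */
    #line 1 "pico_ipv4.c"
    /* ...contents of pico_ipv4.c... */

    #line 1 "pico_tcp.c"
    /* ...contents of pico_tcp.c... */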

You do want to keep the original files separate, because it's a lot easier to do reviews and to poke around in your version control system when you're looking at smaller chunks.

One reason to avoid custom builds for each combination of features is that it effectively puts 2^n versions of the source out there, where n is the number of features you can enable or disable (with just ten toggleable features, that's already 1,024 distinct sources). If somebody points out a bug on line 456 of your sources, you're going to have to determine which set of features was enabled so you know which line 456 it is.

(-) Will need external scripts to remove #includes and merge files together

This shouldn't be too big a deal. The important thing is to make sure the process is automated and repeatable so it becomes a set-and-forget part of your distribution process.
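As a sketch of what the script's output could look like (everything here is hypothetical): hoist and deduplicate the system includes, strip the stack's internal #include lines, and concatenate headers and sources in dependency order, with a banner that warns people off editing the generated file.

    /* picotcp.c -- GENERATED FILE, DO NOT EDIT.
     * Hypothetical banner produced by the merge script.
     * System includes hoisted here and deduplicated: */
    #include <stdint.h>
    #include <string.h>

    /* Internal #include "pico_*.h" lines have been removed; the headers
     * are inlined below in dependency order, followed by the sources. */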

(-) All modules would be included, therefore we'll probably have to add some more compilation flags

That shouldn't be too disruptive, especially if you already have feature selection built into the code. If you're doing it as part of your build environment (i.e., the decision to include or exclude something is part of your Makefile), I would recommend getting away from that.
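One common way to move that decision out of the Makefile and into the code (a sketch, not picoTCP's actual configuration scheme) is a defaults header: every feature macro gets a default unless the user's build has already defined it, and modules test the macro's value rather than its mere definedness.

    /* pico_config.h -- hypothetical defaults block. The user can
     * override any of these from their own build; with no flags at all,
     * the library still compiles with sensible defaults. */
    #ifndef PICO_SUPPORT_TCP
    #define PICO_SUPPORT_TCP 1    /* TCP on by default */
    #endif

    #ifndef PICO_SUPPORT_IPV6
    #define PICO_SUPPORT_IPV6 0   /* IPv6 off by default */
    #endif

    /* Module code then tests the value, not definedness: */
    #if PICO_SUPPORT_TCP
    /* ... TCP module ... */
    #endif

With that in place, the Makefile's -D flags become optional overrides rather than the single source of truth.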

Before diving into a single-file distribution, I would highly recommend studying SQLite, in particular its single-file "amalgamation" distribution. That project has been around for 16 years, is pretty much the gold standard for this sort of thing, and does many of the things you're looking at doing.

Awesome comment! I'll leave this question open for a bit to see if there are more ideas, though I doubt it. You've added some extra items to consider, but your case is very clear!
– Phalox, Dec 31 '16 at 8:55

Maybe one more clarification: while debugging, it's quite useful to know which module you're in (modularity comes with layering, which makes everything harder to follow). Any suggestions for this?
– Phalox, Dec 31 '16 at 8:57