When old OSs (like DOS) were used, compiling in separate modules was mandatory as soon as the program was not very short, because of the small memory available to the compiler (about 200 KB with Quick Basic 4.5). Now, with modern PCs and OSs, this compilation limit is pushed back by a factor of about 10000, and the compile time for a large file in one go has become acceptable, especially with FreeBASIC. This allows another method (other than a library of compiled files) to be used to manage the reusable user procedures.

First, the source code of the reusable user procedures is grouped into different source modules, for example by functionality. Then the process differs depending on the method used.

To simplify the explanation that follows, it is assumed that there is only one source file containing the main program, which calls the various user procedures contained in the source modules to be reused. But one can easily extend these methods to take into account a main program spread over several source files.

1) First method: The compiled modules stored in a library file are included in the linking process with the compiled main program

The principle of this method is to access only the compiled user procedures.

The different modules (containing the sources of the user procedures to be reused) are turned into object files and then stored in a library file (using the '-lib' compile option). Finally, the source file of the main program is compiled, then linked to the library, to make an executable (using the '-l <libname>' compile option, or the '#inclib "libname"' directive placed at the beginning of the main program source code).
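As a sketch of this first method (all file, library, and procedure names here are hypothetical, not taken from the article):

```freebasic
' mymath.bas : a module of reusable user procedures
' built with:  fbc -lib mymath.bas     (produces libmymath.a)
Function AddTwo(ByVal a As Integer, ByVal b As Integer) As Integer
    Return a + b
End Function

' main.bas : the main program, linked against the library
' built with:  fbc main.bas            (the '#inclib' directive below
'                                       replaces '-l mymath' on the command line)
#inclib "mymath"
Declare Function AddTwo(ByVal a As Integer, ByVal b As Integer) As Integer
Print AddTwo(1, 2)
```

Note that the main program needs a Declare line (or a shared header) for each library procedure it calls, since the linker only resolves the external symbols.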

For each compiled module, if at least one of its procedures is called, then the entire module is added to the final executable.

Thus, the granularity of the code added to the executable is at the module level (coarse granularity).

2) Second method: The source modules are included directly in the main source program to be compiled in one go

The principle of this method is to access the full sources of the user procedures.

The different modules (containing the sources of the user procedures to be reused) are directly included in the source of the main program (using the '#include [once] "file"' directive for each module, placed at the beginning of the main program source code). Finally, the big resulting source file is compiled in one go to make an executable.

Since the compiler processes a single source file, all reusable user procedures can be declared as Private (which is obviously impossible when using a library, because external links are required during linkage). As a result, only the Private procedures actually called are kept in the executable.
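A minimal sketch of this second method (file and procedure names are hypothetical):

```freebasic
' mymath.bas : reusable procedures, declared Private
Private Function AddTwo(ByVal a As Integer, ByVal b As Integer) As Integer
    Return a + b
End Function

Private Function TimesTwo(ByVal x As Integer) As Integer
    Return x * 2
End Function

' ------------------------------------------------------------------
' main.bas : includes the module source directly, then the whole
' thing is compiled in one go with:  fbc main.bas
#include once "mymath.bas"
Print AddTwo(1, 2)   ' only AddTwo is kept in the executable;
                     ' TimesTwo is never called and is removed as dead code
```

No Declare lines are needed here: the compiler sees the full procedure bodies, so it can both type-check the calls and drop the unused Private procedures.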

Thus, the granularity of the code added to the executable is at the elementary procedure level (fine granularity).

Note:

The compiler removes Private procedures that are not called, but this does not currently work for Private procedures that are only called by other Private procedures that are themselves never called, because the former appear to be called. The problem is that the one-pass compiler only uses a simple flag to track the "used" state of a procedure, which is set whenever the procedure is accessed, no matter from where.
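A small sketch of this limitation (procedure names are hypothetical):

```freebasic
' Both procedures are Private, and neither is called from the main code.
Private Function Helper(ByVal x As Integer) As Integer
    Return x + 1
End Function

Private Function Wrapper(ByVal x As Integer) As Integer
    Return Helper(x)   ' this call sets Helper's "used" flag
End Function

' Wrapper is never called, so the compiler removes it,
' but Helper was already flagged as used and so stays in the executable.
Print "done"
```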

I am not understanding something here. If I use Private functions in code and compile that code to a static library ('-lib' switch), then how can I access these functions? Example: some functions in testmath.bas, all Private except arcsinh(). I compile with -lib.

This technique is an alternative for those who prefer to use source code instead of modules, libraries or DLLs. The purpose is to have collections of reusable code that do not bloat the executable with unused procedures. If you want to use libraries, then, of course, don't use Private.

Last edited by Josep Roca on May 28, 2018 12:55, edited 1 time in total.

1st method: by using a library. The procedures must be Public, because the external links are required during linkage.

2nd method: by including all sources of the user procedures in the main program. The user procedures can be Private, because no external links are required (all sources are compiled together in one go).

No mention was made that Public functions must be used in a library. This is not intuitive for the thick members (myself included). Also, you miss out on DLLs, which automate the selection anyway; in fact this is their raison d'être (excuse my French).

fxm wrote: When old OSs (like DOS) were used, compiling in separate modules was mandatory as soon as the program was not very short, because of the small memory available to the compiler (about 200 KB with Quick Basic 4.5).

Well, it is also mainly because then you only have to recompile changed modules, and often every module has its own namespace, so you don't have to micromanage every identifier to be globally unique.

Moreover you can delegate initialization/finalization to modules. Stop using the module, and its init/finit code is no longer run. No more micromanaging initialization procedures. Or at least having the option to.

A more minor benefit is improved compiler error reporting, because the compiler checks whether modules match, contrary to the C model where everything only meets at the linker level.

Now, with modern PCs and OSs, this compilation limit is pushed back by a factor of about 10000, and the compile time for a large file in one go has become acceptable, especially with FreeBASIC.

Not really? The faster the better. Preferably if you make a trivial change and hit compile/run in your IDE, the EXE is already starting (Delphi works that way).

This allows another method (other than a library of compiled files) to be used to manage the reusable user procedures.

Why would you want to devolve this?

The different modules (containing the sources of the user procedures to be reused) are turned into object files and then stored in a library file (using the '-lib' compile option). Finally, the source file of the main program is compiled, then linked to the library, to make an executable (using the '-l <libname>' compile option, or the '#inclib "libname"' directive placed at the beginning of the main program source code).

For each compiled module, if at least one of its procedures is called, then the entire module is added to the final executable.

Well, the compiler must then still parse the header declarations for all files in that library to make sure the exported symbol definitions match the ones called in, e.g., the main source.

Nearly all linkers have worked at the symbol (variables, functions) level since nearly forever. Only GNU LD on Windows was relatively late with that (in the 2000s).

I don't see what all this adds/improves, except making a proven solution slower.

@dodicat, the subject of this article is not "How to Work with a Library", but "How to Work Differently than with a Library". You are free to write an article like "How to Work Well with a Library (static or dynamic)".

> I don't see what all this adds/improves, except making a proven solution slower.

1.- You get dead code removal without having to compile each separate procedure as an object file.

2.- You also get dead code removal with classes (types).

3.- You can use conditional compilation and also macros.

4.- You don't need to rebuild libraries each time you make a change. Since we are working with source code, just change it.

5.- The same code works with 32 and 64 bit. No need to build separate libraries for both.

6.- No need for import libraries.
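For instance, points 3 and 5 can be illustrated in a single source module (a sketch; the procedure name is hypothetical, but __FB_64BIT__ is a built-in FreeBASIC define):

```freebasic
' sizeinfo.bas : the same included source compiles for both 32-bit
' and 64-bit targets, with no separate libraries to maintain
Private Function PointerBits() As Integer
    #ifdef __FB_64BIT__
        Return 64   ' this branch is compiled only for 64-bit targets
    #else
        Return 32   ' this branch is compiled only for 32-bit targets
    #endif
End Function

Print PointerBits()
```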

With modern computers, compilation is a bit slower only the first time you use the include files. Then, as they're in the cache, compilation is as fast as using libraries. Maybe this technique will be slow with certain compilers, but certainly not with FreeBASIC or PowerBasic.

Last edited by Josep Roca on May 28, 2018 14:27, edited 1 time in total.