5 Answers

It depends. If you have a small amount of memory, the use of modules may improve resume times, since unneeded drivers are not reloaded every time (I found this significant with 2 GiB of RAM but not with 4 GiB, on traditional hard drives). It was especially true when, due to a bug in the battery module (regardless of whether it was compiled in or built as a module), it took very long to start (several minutes). Even without that bug, on Gentoo I managed to shorten the boot time reported by systemd-analyze from 33 s to 18 s just by switching from a statically compiled kernel to modules; 'surprisingly', the kernel start time dropped from 9 s to 1.5 s.
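If you want to reproduce this measurement, a minimal sketch (assuming a systemd-based distribution):

    # total boot time, split into kernel and userspace phases
    systemd-analyze

    # per-unit breakdown, to see which services dominate startup
    systemd-analyze blame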

Also, when you don't know what hardware you are going to use, modules are clearly beneficial.

PS. You can compile even vital drivers as modules as long as you include them in the initrd. For example, distros include the driver for the / filesystem, hard disk drivers, etc. in the initrd at installation time.
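As a sketch of what that looks like (the exact tool varies by distro; the commands below assume Debian/Ubuntu or Fedora respectively):

    # build the root filesystem driver (ext4 here) as a module
    # in the kernel .config:
    #   CONFIG_EXT4_FS=m

    # then regenerate the initrd so the module is available at boot
    update-initramfs -u    # Debian/Ubuntu
    dracut --force         # Fedora and others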

I think you will gain a few kB of kernel memory, since the granularity of allocations is one page: on a typical architecture each would-be module wastes about 2 kB (half a page) on average. Even on embedded systems, that's hardly significant. You also gain a little disk space, as the built-in code is compressed together with the kernel image; that can be more relevant on embedded systems with little storage.
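A rough back-of-envelope check on a running system (the half-page figure is a statistical average, not something you can read off directly):

    # page size (typically 4096 bytes) and number of loaded modules
    getconf PAGESIZE
    lsmod | tail -n +2 | wc -l

    # expected waste ≈ modules × ½ page, e.g. 50 modules × 2 KiB ≈ 100 KiB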

If you can dispense with modules altogether, you save a little kernel memory (no need for the module loader), disk space (no need for the module utilities), and system complexity (no need to include module loading as a feature in your distribution). These points are quite attractive in some embedded designs where the hardware is not extensible.
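In kernel configuration terms, dispensing with modules means switching off loadable module support entirely and building every needed driver in; for instance, in .config:

    # "Enable loadable module support" switched off in menuconfig:
    # CONFIG_MODULES is not set

    # every required driver must then be built in (=y), e.g.:
    CONFIG_EXT4_FS=y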

Accessing a symbol in a module is a tiny bit slower (an indirection is involved). But the added flexibility (you can load/unload as required, and don't have to build a hand-tailored kernel for the exact hardware in use, ...) is well worth it (unless you are building for a very constrained environment that never changes).
– vonbrand Jan 16 '13 at 16:25

I'm thinking about some performance improvement. Is there any at all?
– phunehehe Sep 5 '10 at 10:23


In the past, people went to great lengths to produce the smallest possible kernel with only what is required. Today this has largely changed. In fact, the first time you load a module you suffer a small performance hit. That is not to say you should compile everything into the kernel :-) See this: articleinput.com/e/a/title/…
– nc3b Sep 5 '10 at 11:10

For a non-vital driver or functionality, is there any benefit? For example, if I just want to use a terminal and a bunch of network utilities, with no other needs, is there any benefit to compiling all the needed drivers into the kernel without loadable module support?
– uray Sep 5 '10 at 11:19


Bull. A system will boot fine with a SCSI driver as a module in an initrd. That's what they are for.
– wzzrd Sep 5 '10 at 20:35

A couple of potential benefits. Performance is an arguable one: you'd avoid some runtime overhead associated with the dynamic loader, but I doubt that's a big deal unless you're depending on a real-time scheduler.

If you're taking advantage of large pages on your system, then a larger static kernel image may make more efficient use of the TLB (the page descriptor cache). Some systems will 'cage' the kernel so that it packs tightly into one memory locality, which can alleviate some of the delay due to minor, and possibly major, page faults.
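On x86 you can get a rough view of how the kernel's direct mapping is backed by large pages from /proc/meminfo (a sketch; the fields shown depend on architecture and kernel version):

    # portion of the kernel's linear mapping backed by
    # 4 KiB, 2 MiB and 1 GiB pages
    grep '^DirectMap' /proc/meminfo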

It might suit you, architecturally, to deliver One Big Image, on the argument that fewer independent modules are easier to maintain and that the loss of flexibility does not matter. A lot of this kind of reasoning ventures into matters of style and practice.