Luckily, Linux has moved to automatic packages and everything, and is usable by novices nowadays. But since I'm a little bit interested in internals, I would like to understand how installing software from source works.

Sometimes you get the sources, a makefile, and config files to build from source. But I have never fully understood how such distributions work internally; even though I'm a programmer myself, I've been programming mostly with the MS stack.

Is the approach standardized? Makefiles are not just compile scripts; they can copy files to system locations, create symlinks, set environment variables, and so on.

So basically, sometimes you get the source into ~/Downloads, run make config, then make, and some magic happens. Unless there are a lot of intermediate steps required.

But I don't fully understand the magic here. Does the binary get copied to /bin? /sbin? /opt? Is it symlinked? How is the binary made available to other users? Is it safe to delete the source afterwards? Or do you have to review every single makefile in the makefile chain to figure out what's happening?

Does anyone know a short, but good description on how this all works, or should work?

That gives some decent information. Obviously it doesn't go into the depth you'd like, but it's a decent place to start. I highly suggest you try programming in a *nix environment to try everything out. In college I started out programming (C++) at the command line on Gentoo, and it really helped me learn a lot about how these things work.

There are quite a few people here that can explain this better. I haven't programmed in years now.

I'd recommend installing Stow to manage your locally-compiled packages. To make use of it, you run "./configure --prefix=/usr/local/stow/PACKAGENAME-version" (in addition to any other arguments to "./configure"), and then once the package is installed you cd to /usr/local/stow and type "sudo stow PACKAGENAME-version" to install symlinks into the /usr/local hierarchy. When you want to remove them (e.g. to upgrade to a newer version), you cd to /usr/local/stow and type "sudo stow -D PACKAGENAME-version".

Madman wrote:Luckily, Linux has moved to automatic packages and everything, and is usable by novices nowadays. But since I'm a little bit interested in internals, I would like to understand how installing software from source works.

The easiest way is to look at some of the automatic package managers and figure out what they do. Here is the user guide for RPM. The first chapter more or less explains what must be done to install a package and why the automatic package managers are a good idea.

But I don't fully understand the magic here. Does the binary get copied to /bin? /sbin? /opt? Is it symlinked? How is the binary made available to other users? Is it safe to delete the source afterwards? Or do you have to review every single makefile in the makefile chain to figure out what's happening?

Once you have built the binary, that is what will be used, and it won't care even if you delete the source altogether. All you need is a soft link to the binary somewhere in your PATH (e.g. /usr/bin). Try it yourself with Hello World.

I don't think there is a standard way to install stuff. Some of the software I've dealt with has a "make install" target which does everything via the Makefile, while other software uses shell or Python scripts for the installation and the Makefile only for compiling.

I have done some programming on Linux; the difference is that Hello World is a single binary, usually without even a .so or .o (whichever it is for dynamic link libraries). But packages like Python, for example, are executables + libs + libs in a hierarchy, like libcwhateverversion.so.

And the second thing is that, usually, the linked executable is copied to a folder relative to the source; for example, make in ~/Projs/SomeApp/src/* would seem to produce ~/Projs/SomeApp/bin/runme, at least that's how it works on Windows. That would break if the SomeApp folder is deleted but some sort of symlink is left in the system.

A fairly concise description of the "Standard" that most distros at least pay some attention to. It should be noted that prototype versions of some system software (written by folks at RedHat/Fedora) may change some of the file system layouts in coming versions of most distros.

In short: typically the "configure" script sorts out how things work on your distribution and creates the appropriate files, including Makefiles and header files. You run "make" and it builds a binary (or library) runnable only by you, in your own directory. After you've checked that it works, you become root and run "make install", and it copies the binary plus any other needed bits (man pages, default config files) into the places that the configure script worked out. Your program (or library) is now usable by every user.
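The copy step is the whole trick, and you can watch it happen with a toy package. This sketch hand-writes the Makefile that a real package's configure script would generate (the package name "toypkg" is made up, and the Makefile uses GNU make's .RECIPEPREFIX so the sketch doesn't depend on literal tab characters; real Makefiles use tabs):

```shell
mkdir -p toypkg
cat > toypkg/toy.c <<'EOF'
#include <stdio.h>
int main(void) { puts("toy 1.0"); return 0; }
EOF

# Stand-in for what ./configure would generate.
# '>' marks recipe lines instead of a hard tab (GNU make 3.82+).
cat > toypkg/Makefile <<'EOF'
.RECIPEPREFIX = >
PREFIX ?= /usr/local

toy: toy.c
>gcc -o toy toy.c

install: toy
>mkdir -p $(DESTDIR)$(PREFIX)/bin
>cp toy $(DESTDIR)$(PREFIX)/bin/toy

.PHONY: install
EOF

make -C toypkg                                            # build in the source tree
make -C toypkg install PREFIX="$PWD/fakeroot/usr/local"   # "install" = copy into place
rm -r toypkg                                              # source tree no longer needed
fakeroot/usr/local/bin/toy                                # the installed copy still works
```

Here the install prefix is redirected into a scratch "fakeroot" directory so no root access is needed; a real "sudo make install" does exactly the same copies, just into /usr/local.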

After the "make install" you can blow away the source directory and the binaries that you used for building, because it did a copy. Distribution packages like .rpm or .deb files just contain what the "make install" did: no source or other intermediate steps. That is, unless you are using a distribution that builds everything from scratch, like Gentoo. The Gentoo users will reply next week, once they have finished building their latest updates.

Something else to keep in mind is that you don't need to be root to build a package, but you need to be root to install it to a shared location where other users (or the system, in the case of a system service) can run it. Depending on the package, it may be possible to run it locally (from your home directory) without installing it into the system directories.

The exact locations of files can vary slightly from distro to distro, and there isn't always a hard-and-fast rule for where something belongs. E.g. you will sometimes see "optional" components installed under the /opt directory hierarchy, and other times they get put under /usr.
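For autotools-style packages, the usual way to get a per-user install is "./configure --prefix=$HOME/.local" followed by "make" and "make install", with no root needed. A sketch of the underlying idea, using a made-up placeholder script instead of a real package (install(1) is the same per-file copy a Makefile's install target typically runs):

```shell
# Per-user "install" without root: everything stays under $HOME.
mkdir -p "$HOME/.local/bin"
printf '#!/bin/sh\necho "local tool"\n' > mytool
chmod +x mytool
install -m 755 mytool "$HOME/.local/bin/mytool"   # copy with mode bits, like 'make install'
rm mytool                                         # the "source" can go

export PATH="$HOME/.local/bin:$PATH"              # make it findable by name
mytool
```

Many distros already put ~/.local/bin on PATH by default, in which case the export line is unnecessary.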

Regarding the distinction between the different types of object code files:

A binary (typically without any extension) is a ready-to-run file containing executable machine code; in order for the system to recognize it as a runnable binary, the execute (x) bit must be set in its directory entry.

.o files are compiled object code corresponding to a single source code file. A .o file is not runnable as-is; it must be processed by the linker to resolve references to any other modules that make up the application it is a part of, and to any statically and dynamically linked libraries it uses (the linker is what produces the final binary).

.a files are searched by the linker to resolve references to library functions. They can contain actual object code of the library (for static linking), or information which the linker can embed in the binary to allow the library function to be found at runtime (dynamically linked shared libraries).

Almost right, notfred. A good explanation, except a program is not a library. In fact, one of the main reasons a ./configure will fail is missing libraries.

A library (a DLL on Windows, a .so in *nix) is what it says: a collection of books. Well, a collection of data and code anyway. It's shared functionality that any program can call on.

PenGun wrote:Almost right notfred. A good explanation except a program is not a library. In fact one of the main reasons a ./configure will fail is missing libraries.

Yup, and I didn't think I said that a program was a library. Some packages build a library rather than an executable; that's how you fix the missing libraries if you cannot get them from your distribution. That's all I was trying to get across.
