His site also includes things like tutorials on "Optimizing with gcc and glibc" and "How to Write Shared Libraries".
_________________
DEATH TO SPREADSHEETS
- - -
Classic Puppy quotes
- - -
Beware the demented serfers!

All perfectly reasonable arguments, but no answer to the library version mayhem that is the curse of Linux. That is why people choose to use static libraries.
_________________
Classic Opera 12.16 browser SFS package for Precise, Slacko, Racy, Wary, Lucid, etc available here

As I mentioned the other day, I tend to wonder if "dependency hell" on Linux is mostly just urban legend.

I do see it on Windows at work all the time. Most programs ship their own versions of all their dependencies, kept in their own folders (just as bad as static linking for wasting space and bandwidth). But the odd application will install things in the main "system" folder, and very often these will be obsolete versions, in which case they break a bunch of other programs, because they override the versions those programs keep in their own folders. But if there were a standard "repository" for Windows (there is a real package manager, used by cygwin and osgeo4w and various things), there wouldn't be a problem, because all the programs would be compiled against the same current libs.

I guess what I'm saying is really that if the package management system works acceptably, it is the answer to your "mayhem".

Puppy is mostly dynamically linked --- that's why, when you "upgrade" libraries, things can break ... not so when statically linked

go ahead and upgrade libxcb... and have fun
upgrade from gtk2.16 to any version past 2.18 and be annoyed

I would take any advice from the maintainer of wontfix-libc with a grain of salt

the problem isn't really shared libs either - it's the crap GNU tools we use to build them, which unnecessarily link in symbols that are not needed because pkg-config wrongly says to do so (or auto* thinks it did)

but the stupid autotools link in the entire friggin' dependency toolchain directly, causing every used function to get its own special spot in the global offset table so that it can theoretically start .0000001s faster - so long as nothing _ever_ moves, changes, or gets rebuilt with slightly modified options or compiler flags ... then it loads much, much slower (not to mention creating an unnecessarily larger binary).
Then god forbid you want/need to upgrade to a version with a changed API ... say xcb ... even though only libX11 directly depends on it (and a few less popular apps), nearly everything built against libX11 will break. No problem - just recompile libX11 and you're good, right? Nope - the linker listened when you told it to directly link libxcb, so everything you compiled with autotools is now broken.
_________________
Web Programming - Pet Packaging 100 & 101

yes - jwm doesn't, mtPaint doesn't (they have their own configure scripts) - and it is perfectly acceptable to have an editable Makefile (or other custom build script).

The problem is, we dropped the ball so long ago by not integrating the dev environment with the packaging environment that it's really too late now for a small team to go back, pick up the ball and "do it right" ... but we can at least do it better

On a separate note - just for reference, I can build (and have built) a fully self-contained 2.6.32 kernel with a statically linked, built-in userland including X in a single 1 MB kernel image that will run in <4 MB RAM ... not really possible with shared glibc ... but then again I am using multicall binaries built with my own userland build scripts (only because it was easier for me to do it that way ... not something I would want to do for _every_ package by myself)

Quote:

The problem is, we dropped the ball so long ago by not integrating the dev environment with the packaging environment, that now its really too late for a small team to go back and pick up the ball and "do it right" ... but we can at least do it better

I'm struggling to follow you here - have you come straight from the package management thread by any chance?
Who is the "we" you refer to? Puppy packagers? Developers of Linux software in general? Distro builders in general?

Quote:

The problem is, we dropped the ball so long ago by not integrating the dev environment with the packaging environment, that now its really too late for a small team to go back and pick up the ball and "do it right" ... but we can at least do it better

Quote:

I'm struggling to follow you here - have you come straight from the package management thread by any chance?
Who is the "we" you refer to? Puppy packagers? Developers of Linux software in general? Distro builders in general?

not just Linux - pretty much all *nixes. Have you ever watched what a ./configure script does, or gone through one? OMG, what a disaster - but I find it hilarious when I download a 1000-byte program with a 100 kB configure script. Rob Landley says it best:
http://landley.net/notes-2011.html#28-08-2011
*BSD has bsdbuild and others which do essentially the same thing

my point with integrating dev and packaging was that all of the garbage the configure script does could already be done by a properly set-up package management system.
For example (not well thought out, just a "for instance"):
if a library (libmyclib) provides snprintf, it could add the following to the "<systemconfig_file>":
#define HAS_SNPRINTF -lmyclib
which would not only tell the system that we have snprintf, but also how to link it
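To make the idea concrete, here is a rough sketch of what source code consuming such a flag might look like. Everything here is invented for illustration - the HAS_SNPRINTF define is hard-coded rather than read from the imagined <systemconfig_file>, and format_int is a made-up helper - but it shows the point: the #ifdef replaces a configure-time probe, because the package manager already recorded the answer when it installed the library.

```c
#include <stdio.h>
#include <string.h>

/* Pretend the package manager wrote this line into <systemconfig_file>
   when it installed a libc providing snprintf; hard-coded here only so
   the sketch compiles on its own. */
#define HAS_SNPRINTF 1

#ifdef HAS_SNPRINTF
/* the flag is set, so just use the real thing */
#define format_int(buf, size, v) snprintf((buf), (size), "%d", (v))
#else
/* crude bounded fallback for systems whose config file lacks the flag */
static int format_int(char *buf, size_t size, int v)
{
    char tmp[32];
    int n = sprintf(tmp, "%d", v);
    strncpy(buf, tmp, size - 1);
    buf[size - 1] = '\0';
    return n;
}
#endif
```

No test program ever runs at build time - the preprocessor does all the "configuring".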

... the only plausible way to even attempt this (and get it into mainstream) is to try and shim it into the autotools caching mechanism to make it think it has already verified everything

.....sorry to get off topic, but getting back to shared vs. static

shared libs are vulnerable to this:
LD_PRELOAD=/tmp/vicious_attacklib.so <binary>
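For reference, the preload shim itself is only a few lines of C. This is a harmless sketch of the interposition trick that line exploits (it just counts calls - the names intercepted_calls and the fopen target are my choices, not anything from a real attack): a same-named symbol in a preloaded object shadows libc's for every dynamically linked program, while a static binary never consults the dynamic loader, so there is nothing for the trick to grab onto.

```c
/* Sketch of an LD_PRELOAD interposer (the benign version of a
   "vicious_attacklib.so").  Normally built as a shared object:
       gcc -shared -fpic -o attacklib.so attacklib.c
       LD_PRELOAD=./attacklib.so some_binary
   Compiled into an ordinary executable, the same symbol shadowing
   still demonstrates the mechanism. */
#define _GNU_SOURCE
#include <stdio.h>
#include <dlfcn.h>

int intercepted_calls;   /* the "payload": just count hijacked calls */

/* Same name and signature as libc's fopen, so callers resolve to us
   first; RTLD_NEXT then finds the real fopen further down the chain. */
FILE *fopen(const char *path, const char *mode)
{
    FILE *(*real_fopen)(const char *, const char *);
    intercepted_calls++;
    real_fopen = (FILE *(*)(const char *, const char *))
        dlsym(RTLD_NEXT, "fopen");
    return real_fopen(path, mode);
}
```

A real attack would put arbitrary code where the counter increment is - keylogging, credential theft, whatever - and the victim program has no idea.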

if a shared lib has a vulnerability, ALL of the programs linked against it do too (with a static link, some may have been linked against non-vulnerable versions, or the vulnerable code may not even be linked in if it isn't used)

and it is FUD that static binaries are slower (in fact they are ~100-4000% faster)
http://sta.li/faq

Quote:

* fixes (either security or only bug) have to be applied to only one place: the new DSO(s). If various applications are linked statically, all of them would have to be relinked. By the time the problem is discovered the sysadmin usually forgot which apps are built with the problematic library. I consider this alone (together with the next one) to be the killer arguments.

breakages only have to be fixed in one place too.
If you maintain your source tree, it's pretty easy to figure out what was built against what using simple tools (find and grep),
and to verify changes (or the lack of them) using edelta or xdelta

Quote:

* Security measures like load address randomization cannot be used. With statically linked applications, only the stack and heap address can be randomized. All text has a fixed address in all invocations. With dynamically linked applications, the kernel has the ability to load all DSOs at arbitrary addresses, independent from each other. In case the application is built as a position independent executable (PIE) even this code can be loaded at random addresses. Fixed addresses (or even only fixed offsets) are the dreams of attackers. And no, it is not possible in general to generate PIEs with static linking. On IA-32 it is possible to use code compiled without -fpic and -fpie in PIEs (although with a cost) but this is not true for other architectures, including x86-64.

yes, because they aren't nearly as vulnerable to the primary vectors for those exploits, such as LD_* attacks and ldd escalations (people put locks on doors, not walls)
you _can_ "statically" link a PIE - just compile your "static" lib(s) with -fpic (you will get the dirty pages and other PIC overhead, but at least the unused code will be removed)

Quote:

* more efficient use of physical memory. All processes share the same physical pages for the code in the DSOs. With prelinking startup times for dynamically linked code is as good as that of statically linked code.

no, they share _some_ pages (only the read-only ones), add quite a few extra dirty pages, and if you prelink, then the load times skyrocket as soon as you change a single shared lib - don't ever do it, it will suck almost immediately
I have tested this with a plethora of compiler/linker optimizations, hacks and tricks, and the closest I could get to the startup speed of the static binary counterpart was still only half as fast

Quote:

* all kinds of features in the libc (locale (through iconv), NSS, IDN, ...) require dynamic linking to load the appropriate external code. We have very limited support for doing this in statically linked code. But it requires that the dynamically loaded modules available at runtime must come from the same glibc version as the code linked into the application. And it is completely unsupported to dynamically load DSOs this way which are not part of glibc. Shipping all the dependencies goes completely against the advantage of static linking people site: that shipping one binary is enough to make it work everywhere.

another wontfix glibc bug

Quote:

* Related, trivial NSS modules can be used from statically linked apps directly. If they require extensive dependencies (like the LDAP NSS module, not part of glibc proper) this will likely not work. And since the selection of the NSS modules is up the the person deploying the code (not the developer), it is not possible to make the assumption that these kind of modules are not used.

yet another wontfix glibc bug

Quote:

* no accidental violation of the (L)GPL. Should a program which is statically linked be given to a third party, it is necessary to provide the possibility to regenerate the program code.

seriously - you don't think it is possible to accidentally violate the LGPL? If you can't remember what library version a program is linked against (because you didn't track it), what magic makes you remember patching it to build your code?
if you are statically linking, there is no doubt about whether or not you need to include the static libs of LGPL libraries

Quote:

* tools and hacks like ltrace, LD_PRELOAD, LD_PROFILE, LD_AUDIT don't work. These can be effective debugging and profiling, especially for remote debugging where the user cannot be trusted with doing complex debugging work.

exactly - but there are others that do work that _aren't_ a giant gaping security hole ... tools that aren't designed specifically for shared libraries (strace, for instance) work fine ... that's like saying hammers don't make very good screwdrivers

Thanks, that's a great link, although it would be good to see a lot of real-life numbers, particularly as those guys are focused on small programs. As a user I don't think I generally care about small programs (exceptions would be a few things used by shell scripts like pburn, but for most of those you'd ideally use busybox anyway). Where I would notice a big performance increase is in big programs like browsers. They say:

I'm guessing "easily outperform" is a lot less than 4000%... but where are the numbers?
How common are "Good libraries" that

Quote:

implement each library function in separate object (.o) files, this enables the linker (ld) to only extract and link those object files from an archive (.a) that export the symbols that are actually used by a program.

?

================
Off-topic: have you tried Stali or Sabotage Linux or anything? I was quite interested in a distro based around uClibc or something and busybox, but it looked like they weren't that viable (alive and with a good selection of apps). A distro based on static linking would be even more interesting.

Quote:

implement each library function in separate object (.o) files, this enables the linker (ld) to only extract and link those object files from an archive (.a) that export the symbols that are actually used by a program.

?

Not very - I can think of one that does it (dietlibc), but it is not really "Good" for other reasons

Quote:

================
Off-topic: have you tried Stali or Sabotage Linux or anything? I was quite interested in a distro based around uClibc or something and busybox, but it looked like they weren't that viable (alive and with a good selection of apps). A distro based on static linking would be even more interesting.

Goingnuts and I have been trying to marry the best of both worlds by merging the best mix of smaller (but still useful) tools, static-build advantages, small replacement libraries, the multicall binary (mcb) concept, and compiler/linker optimizations.
We try to strike a balance between size, functionality, etc.
For instance, one mcb contains the X11 apps ... xinit, Xvesa, jwm, rxvt
another has the gtk1 apps ... ROX-Filer, Minimum Profit, dillo1, Xdialog, mtPaint, aumix
I did test this with multiple apps open, compared to the same mcb built against my Wary box's shared libs, and resource usage in Wary increased at nearly double the rate per app - which is fairly consistent with the firefox and seamonkey builds from lamarelle.org (in case you want a "real world" example), though they only use static mozilla libs (well, mostly).

Last edited by technosaurus on Wed 22 Feb 2012, 14:23; edited 1 time in total
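For anyone unfamiliar with the mcb idea, it is the same trick busybox uses: one statically linked executable looks at the name it was invoked by and dispatches to the matching tool, with each tool name hard- or sym-linked to the single binary, so the common library code is shared once at link time instead of via shared libs. A toy sketch - the applet names and run_applet helper are invented, not anything from the actual builds described above:

```c
#include <stdio.h>
#include <string.h>

/* Two stand-in "applets"; in a real mcb these would be the real tools'
   entry points (jwm, rxvt, ...), all linked into one binary. */
static int hello_main(void) { puts("hello"); return 0; }
static int bye_main(void)   { puts("bye");   return 0; }

/* Dispatch on the basename of argv[0]: "hello" and "bye" would each be
   a hard link (or symlink) pointing at this one executable. */
int run_applet(const char *argv0)
{
    const char *name = strrchr(argv0, '/');
    name = name ? name + 1 : argv0;   /* strip any leading path */
    if (strcmp(name, "hello") == 0)
        return hello_main();
    if (strcmp(name, "bye") == 0)
        return bye_main();
    fprintf(stderr, "%s: applet not compiled in\n", name);
    return 127;
}
```

With main() simply returning run_applet(argv[0]), "ln mcb hello; ./hello" and "ln mcb bye; ./bye" behave as two different programs from one image.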

Quote:

Have you ever watched what a ./configure script does or gone through one? OMG, what a disaster, but I find it hilarious when I download a 1000 byte program with a 100kb config script - Rob Landley says it best

In 2006, when I was choosing which image editor project I would like to contribute my code to, one of my primary criteria was "a configure script which I could read". mtPaint fit the bill; GIMP didn't. The rest is history.

technosaurus: have you done a statistical analysis of common lib. usage?
The first thing to do would be making a list of the apps. that are relevant, then taking statistics on their libs. to use in separating static and shared libs.
The thought being: compile libs. statically if they're small and seldom used.
Though even common libs. with many different versions would qualify too - the main, most commonly used version would be shared and the others static.
