Also it seems that on Solaris this exact issue doesn't show up at compile time
but at run time, so you may see errors like:

.../mod_perl-1.99_17/blib/arch/auto/APR/APR.so' for module APR:
ld.so.1: /usr/local/ActivePerl-5.8/bin/perl: fatal:
libgdbm.so.3: open failed: No such file or directory at
...5.8.3/sun4-solaris-thread-multi/DynaLoader.pm line 229.

The solution is the same: make sure that you have the libgdbm shared library and that it's properly symlinked.

Sometimes Perl is unable to locate a certain Perl module. This can happen in the mod_perl test suite or in a normal mod_perl setup. One possible reason is a low limit on the number of files that can be opened by a single process. To check whether this is the problem, run the process under strace(1) or an equivalent utility.

For example, on OpenBSD 3.5 the default limit on the number of files opened by a single process seems to be 64, so when you run the mod_perl test suite, which opens a few hundred files, you will have a problem: e.g. the test suite may fail with "Too many open files" errors.
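Whether the file-descriptor limit is the culprit can be checked (and, with sufficient privileges, raised) from an sh-compatible shell before running the tests:

```shell
# Show the current per-process limit on open file descriptors:
ulimit -n
# Raise it for this shell and its children before running the tests
# (the value 1024 is just an example; raising the hard limit may
# require root):
#   ulimit -n 1024
```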

That error message means that mod_perl was built against an Apache released on or after 20020628, but you are trying to load it into one released on or after 20020903. You will see the same error message for any other Apache module -- this is an error coming from Apache, not mod_perl.

Apache bumps up a special magic number every time it makes a binary-incompatible change, and then it makes sure that all modules it loads were compiled against the same compatibility generation (which may span one or several Apache releases).

You may encounter this situation when you upgrade to a newer Apache without rebuilding mod_perl, when you have several versions of Apache installed on the same system, or when you install prepackaged binaries that weren't built against the same Apache version.

The solution is to have mod_perl built against the same Apache that is installed on your system. So either build from source, or contact your binary package supplier and get proper packages from them.

First you need to figure out where it hangs. strace(1) or an equivalent utility can be used to discover which call the server hangs on. You need to start the process in single-server mode so that you have only one process to monitor.
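For example (paths and options are illustrative; on Solaris, truss(1) plays the same role):

```shell
# Run the server in single-process mode under strace, logging all
# syscalls to a file (adjust the httpd path to your installation,
# and add any -D defines that appeared in the original `make test`
# output):
#
#   strace -o /tmp/httpd.trace /usr/local/apache2/bin/httpd -X
#
# When it hangs, the last lines of the trace show the blocking call:
tail -n 20 /tmp/httpd.trace 2>/dev/null || echo "no trace yet - run strace first"
```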

(and maybe -DPERL_USEITHREADS, if it was in the original output of make test.)

If the trace ends with:

open("/dev/random", O_RDONLY) = 3
read(3, <unfinished ...>

then you have a problem with your OS: /dev/random doesn't have enough entropy to supply the required random data, and therefore it blocks. This may happen in the apr_uuid_get() C call or in Perl's APR::UUID->new.

The solution in this case is to fix the problem with your OS, so that

% perl -le 'open I, "/dev/random"; read I, $d, 10; print $d'

will print some random data and not block. Or you can use an even simpler test:

% cat /dev/random

which should print some random data and not block.

If you can't fix the OS problem, you can rebuild Apache 2.0 with --with-devrandom=/dev/urandom -- however, that is not secure enough for certain needs. Alternatively, set up EGD and rebuild Apache 2.0 with --with-egd. Apache 2.1/apr-1.1 will have a self-contained PRNG built in, which won't rely on /dev/random.

httpd-2.0 is not very helpful at telling which device has run out of precious space. Most of the time when you get an error like:

(28)No space left on device:
mod_rewrite: could not create rewrite_log_lock

it means that your system has run out of semaphore arrays. Sometimes they are all taken by legitimate semaphores; at other times it's because some application has leaked semaphores and hasn't cleaned them up during shutdown (which is usually the case when an application segfaults).

Use the relevant utility to list the IPC facilities usage. On most Unix platforms this is the ipcs(1) utility. For example, on Linux, to list the semaphore arrays you should execute:

% ipcs -s
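On Linux, for example, ipcs(1) lists the semaphore arrays and ipcrm(1) removes a leaked one (the semaphore id below is purely illustrative):

```shell
# List the semaphore arrays currently allocated:
ipcs -s
# A leaked array can then be removed by its semid (owner or root only);
# the id 123456 is just an example:
#   ipcrm -s 123456
```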

When building mod_perl 2.0 on HP-UX 11 for the PA-RISC architecture with the HP ANSI C compiler, please make sure you have installed patches PHSS_29484 and PHSS_29485. Once they are installed, the issue should go away.

that usually means that you built your non-mod_perl Perl modules with an ithreads-enabled perl, then built a new perl without ithreads, but didn't rebuild those old modules. Now when you try to run them, you get the above segfault. To solve the problem, recompile all the modules. The easiest way to accomplish that is to remove all the modules completely, build the new perl, and then install the modules afresh. Prior to deleting the old modules you could also create a bundle of them using CPAN.pm, so that you can easily reinstall the same set of modules you had before.

Certain editors (in particular on win32) may add a UTF-8 Byte Order Mark (BOM: http://www.unicode.org/faq/utf_bom.html#BOM) at the beginning of the file. Since ModPerl::RegistryCooker adds extra code in front of the original script before compiling it, the BOM ends up past the beginning of the file, which is why the error:

Unrecognized character \xEF at ...

is thrown by Perl.

The simplest solution is to configure your editor not to add a BOM (or to switch to an editor which doesn't).

You could also subclass ModPerl::RegistryCooker or its existing subclasses to try to remove BOM in ModPerl::RegistryCooker::read_script():

but do you really want to add the overhead of this operation on every script load, when you could just fix the source file once? Probably not. It was also reported that on win32 the above s/// doesn't work.

For example, some people have reported problems with DBD::Oracle (whose guts are implemented in C): it doesn't see environment variables (like ORACLE_HOME, ORACLE_SID, etc.) set in the Perl script and therefore fails to connect.

The issue is that the C array environ[] is not thread-safe. Therefore mod_perl 2.0 unties %ENV from the underlying environ[] array under the perl-script handler.

The DBD::Oracle driver or client library uses getenv() (which fetches from the environ[] array). When %ENV is untied from environ[], Perl code will see %ENV changes, but C code will not.

The modperl handler does not untie %ENV from environ[]. Still, one should avoid setting %ENV values whenever possible; and if it is required, it should be done at server startup time.

In the particular case of the DBD:: drivers, you can set the variables that don't change ($ENV{ORACLE_HOME} and $ENV{NLS_LANG}) in the startup file, and pass those that change via the connect() method, e.g.:
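For example (a sketch only: the paths, locale, database name and credentials are placeholders; the connect string follows the usual DBD::Oracle form):

```perl
# -- in startup.pl: values that never change, set before DBI/DBD::Oracle
# -- is loaded (path and locale below are examples)
$ENV{ORACLE_HOME} = '/usr/local/oracle';
$ENV{NLS_LANG}    = 'AMERICAN_AMERICA.UTF8';

# -- in the handler: per-connection values go through connect()
use DBI ();
my $dbh = DBI->connect("dbi:Oracle:$dbname", $username, $password,
                       { RaiseError => 1, AutoCommit => 0 });
```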

Also remember that DBD::Oracle requires that ORACLE_HOME (and other settings like NLS_LANG) be in %ENV when DBD::Oracle is loaded (which might happen indirectly via the DBI module). Therefore you need to make sure that %ENV is properly set by the time that load happens.

Another solution, which works only with the prefork MPM, is to use Env::C ( http://search.cpan.org/dist/Env-C/ ). This module sets the process-level environ[], bypassing Perl's %ENV. It is not thread-safe, due to the nature of the per-process environ struct, so don't even try using it in a threaded environment.
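For example (a sketch using Env::C's setenv/getenv interface as documented on CPAN; the path is a placeholder):

```perl
use Env::C ();

# Writes straight into the C-level environ[], so getenv() in the
# Oracle client library can see the value (prefork MPM only!):
Env::C::setenv('ORACLE_HOME', '/usr/local/oracle');

# Verify what the C level actually sees:
my $home = Env::C::getenv('ORACLE_HOME');
```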

Apache uses the sendfile syscall on platforms where it is available in order to speed up sending of responses. Unfortunately, on some systems Apache will detect the presence of sendfile at compile time even when it does not work properly. This happens most frequently when using network or other non-standard filesystems.
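If you hit this (typically as zero-length or corrupted responses when serving files from NFS or a similar filesystem), the usual workaround is to disable the feature in httpd.conf via the EnableSendfile core directive (available in Apache 2.0.44 and later; check your version):

```apache
# httpd.conf: fall back to read/write instead of sendfile(2)
EnableSendfile Off
```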

After a successful mod_perl build, sometimes during startup or at run time you'd get an "undefined symbol: foo" error. The following is one possible scenario for encountering this problem and possible ways to resolve it.

Let's say you ran mod_perl's test suite:

% make test

and got errors, and you looked in the error_log file (t/logs/error_log) and saw one or more "undefined symbol" errors, e.g.

that means that the library was stripped. You probably want to obtain the Apache 2.x or libapr source matching your binary and check that instead. Or rebuild it with debugging enabled, which will not strip the symbols.

Note that the "grep table_compress" is only an example, the exact string you are looking for is the name of the "undefined symbol" from the error_log file. So, if you get:

Those are name => value pairs showing the shared libraries used by the httpd binary.
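ldd(1) is what produces such a listing; run it against your httpd binary (demonstrated here on /bin/sh, since the httpd path varies per installation):

```shell
# Print the "library => resolved path" pairs for a dynamically linked
# binary; substitute your httpd, e.g. /usr/local/apache2/bin/httpd:
ldd /bin/sh
```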

Take note of the value for libapr-0.so.0 and compare it to what you got in step 1. They should be the same; if not, then mod_perl was compiled against the wrong Apache installation. You should run "make clean" and then

You should also search for extra copies of libapr-0.so.0. If you find one in /usr/lib or /usr/local/lib that will explain the problem. Most likely you have an old pre-installed apr package which gets loaded before the copy you found in step 1.

On most Linux (and Mac OS X) machines you can do a fast search with:

% locate libapr-0.so.0

which searches a database of files on your machine. The "locate" database isn't always up-to-date, so a slower, more comprehensive search can be run (as root if possible):

% find / -name "libapr-0.so.0"

This warning is normally a result of variables that your script shares with subroutines globally, rather than passing them by value or by reference. As the cause and solution of this are virtually identical to another commonly encountered problem (Sometimes it works, sometimes it doesn't), the text is not repeated here but is instead included in that section, which follows this one.

You may have read somewhere that this warning can be ignored, but if you read on you will see that you should never ignore it. The other thing that might confuse you is that this warning is normally encountered when defining subroutines within subroutines. So why would you experience it in your script where that is not the case? The reason is that mod_perl wraps your script in its own subroutine (see the Perl Reference documentation for more details).

When you start running your scripts under mod_perl, you might find yourself in a situation where a script seems to work, but sometimes it screws up. And the more it runs without a restart, the more it screws up. Often the problem is easily detectable and solvable. You have to test your script under a server running in single process mode (httpd -X).

Generally the problem is the result of using global variables (normally accompanied by a Variable "$x" will not stay shared warning). Because global variables keep their values from one script invocation to another unless you reset them, you can find your scripts doing strange things.

The first example is amazing: Web Services. Imagine that you enter some site where you have an account, perhaps a free email account. Having read your own mail you decide to take a look at someone else's.

You type in the username you want to peek at and a dummy password and try to enter the account. On some services this will work!!!

You say, why in the world does this happen? The answer is simple: Global Variables. You have entered the account of someone who happened to be served by the same server child as you. Because of sloppy programming, a global variable was not reset at the beginning of the program and voila, you can easily peek into someone else's email! Here is an example of sloppy code:

Do you see the catch? With the code above, I can type in any valid username and any dummy password and enter that user's account, provided she has successfully entered her account before me using the same child process! Since $authenticated is global--if it becomes 1 once, it'll stay 1 for the remainder of the child's life!!! The solution is trivial--reset $authenticated to 0 at the beginning of the program.

A cleaner solution, of course, is not to rely on global variables, but on the return value of the function.

Just another little one-liner that can spoil your day, assuming you forgot to reset the $allowed variable. It works perfectly OK under plain mod_cgi:

$allowed = 1 if $username eq 'admin';

But under mod_perl, if your system administrator with superuser access rights has previously used the system, anybody who is lucky enough to be served later by the same child which served your administrator will gain the same rights.

Another good example is usage of the /o regular expression modifier, which compiles a regular expression once, on its first execution, and never compiles it again. This problem can be difficult to detect, as after restarting the server each request you make will be served by a different child process, and thus the regex pattern for that child will be compiled afresh. Only when you make a request that happens to be served by a child which has already cached the regex will you see the problem. Generally you miss that. When you press reload, you see that it works (with a new, fresh child). Eventually it doesn't, because you get a child that has already cached the regex and won't recompile because of the /o modifier.