I'm doing some market research for Stratopan, which is an experimental service for hosting private CPAN-like repositories of Perl modules in the cloud. I'd like to know how you manage the upgrade and deployment of CPAN modules in your world. Some of the techniques I've seen are:

Install from a local mini CPAN

Vendor packages (RPM, PPM, Debian Pkgs)

Install directly from public CPAN

Stash the tarballs from CPAN in version control

Build the modules directly into our source tree

Use carton and/or cpanfile

Don't use any CPAN modules

Never upgrade any CPAN modules

The sysadmins do everything

Aside from helping me, your answers will shed some light on how CPAN modules are handled in the wild, which could lead to some insights on how to improve the tool chain. Just this past weekend at the QA Hackathon, there was much discussion about improving the CPAN infrastructure, so your feedback will be really helpful to them.

I'd like to know how you manage the upgrade and deployment of CPAN modules in your world.

Seems this question comes up from time to time.

I have some Real World™ experience with this issue from a job I had a long, long time ago. It was a very large financial firm that has since become a "fallen flag" after the banking crisis. One of my jobs there was to manage versions of CPAN modules in our environment on a worldwide basis, which consisted of loading new modules from CPAN, building them for multiple platforms, and distributing them. (I'll go into detail on that in a bit.)

We also had a plethora of locally home-grown modules that were constantly being updated.

Another facet of my job was, once a month, to look through the emails I got about new and improved CPAN modules and decide which ones would be brought into our environment and updated in our module trees.

Distribution was handled via extensive AFS paths that covered every combination of platform we supported Perl on. Even Perl itself was distributed this way; the last item on my "todo" list was to bring new versions of Perl into the mix.

Details are a bit fuzzy after about 8 years away from AFS, but there was a mechanism in place where the mount table facility used variables encoding things like the platform and OS version (and there may be errors here since it's been so long), and symlinks on the local host were used to point to the right places from the local hard drive. Automounting then took care of mounting the correct volumes.
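For the curious, the AFS trick described above can be sketched like this. All paths and the sysname are invented for illustration, and a plain directory stands in for AFS's "@sys" expansion (on a real AFS client, a literal "@sys" path component expands to the client's sysname, so one symlink serves every platform):

```shell
# Fake per-platform tree; "amd64_linux26" plays the role of one @sys value.
mkdir -p /tmp/demo-afs/perl/amd64_linux26/bin
: > /tmp/demo-afs/perl/amd64_linux26/bin/perl   # stand-in for a real binary

# Real-world equivalent might be:
#   ln -s /afs/cell.example.com/perl/@sys /usr/local/perl
# Here we point straight at the platform directory instead:
ln -sfn /tmp/demo-afs/perl/amd64_linux26 /tmp/demo-afs/localperl

ls /tmp/demo-afs/localperl/bin
```

Each host resolves the same symlink to its own platform's build, which is what made a single worldwide path layout workable.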

So that's an extreme example. I imagine something a lot simpler using NFS would suffice, but NFS over trans-Atlantic links would be dicey at best, which is why AFS was used instead, with local cell servers as appropriate.

We install Perl modules into repos that we deploy from. Rarely, we put pure-Perl modules into our main repo of source code, either because we expect to have to make progressive changes to the module or (even more rarely) because there is less overhead (and lead time) in deploying just the one repo.

Mostly, we install modules into a "Perl modules" repo which we deploy as part of setting up a system. Such repos have a separate directory for each platform we deploy to. We get the modules into the repo by checking out the appropriate subdirectory on an appropriate system and using whatever tools are appropriate to install the module into that subdirectory.

Rarely, we fix bugs in a "Perl modules" repo. So it is good to have "deploy" just be "copy the files", not crazy things like "run this tool" or even "compile this code". Much more worrying would be a deploy that included "download this list of modules". It is also good that the work of getting a module installed only has to be done once per module per platform.
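A minimal sketch of that "install once per platform, deploy by copying" flow, with invented repo paths. The cpanm --local-lib line is one way to do the install step; it is shown as a comment since it needs a build host and network access:

```shell
# Invented "Perl modules" repo with one subdirectory per platform.
repo=/tmp/demo-perl-modules
mkdir -p "$repo/linux-x86_64" "$repo/solaris-sparc"
: > "$repo/linux-x86_64/placeholder.pm"   # stand-in for installed files

# On a build host for each platform, something like:
#   cpanm --local-lib "$repo/linux-x86_64" Some::Module
# then check the resulting files in.

# Deployment is then nothing more than copying the checked-in files:
mkdir -p /tmp/demo-target
cp -R "$repo/linux-x86_64" /tmp/demo-target/perl-modules
ls /tmp/demo-target/perl-modules
```

The point is that the deploy step never compiles, downloads, or runs tooling; it only copies what was already built and committed once per platform.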

Unfortunately, the people maintaining these repos failed to keep track of which versions of which modules were included. So we have a backlog item to catalog that information (as a prerequisite for evaluating the impact of standardizing on a newer version of Perl yet again).

It is good that the repo tracks what files got changed when. They just don't directly track which module distribution(s) got installed that caused those files to be added or updated. But it is also true that not all changes are due to installing a new version of a module, so using a repo is important, IMHO.

"Perl module" repos tend to be specific to a particular product / team. We have a small number of approved builds of Perl that are in a "Perl binary" repo with a separate subdirectory per Perl build per platform. For example, we might have two approved builds of Perl 5.10.1, named perl-5.10.1-1 and perl-5.10.1-2, so we'll have a subdirectory for each supported platform for each of those builds.

Each team picks the build of Perl that they will use and so has "perl-5.10.1-2" in its project configuration files. So the team for the Fluff project ends up with fluff/bin/perl being a symbolic link to /site/perl/perl-5.10.1-2/bin/perl.

And that means that all the modules that get added to the fluff-perl-modules and fluff-perl-graphics and fluff-perl-mason module repos are installed using /site/fluff/bin/perl.

We will probably evaluate some tools for managing a stable of modules for the purpose of simplifying and/or automating:

Tracking which versions of which modules are currently in a specific module repo

Batching up the building of all those modules when a new platform needs to be supported

Predicting the impact of upgrading one or more of the modules (which other modules will also need to be updated)
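For the first item, one crude approach (absent a proper tool) is simply scanning the module tree for $VERSION lines. This demo fabricates a tiny tree to scan; the module name and version are invented:

```shell
# Fake one-module "Perl modules" repo checkout.
mkdir -p /tmp/demo-modrepo/lib/Foo
cat > /tmp/demo-modrepo/lib/Foo/Bar.pm <<'EOF'
package Foo::Bar;
our $VERSION = '1.23';
1;
EOF

# List every module file and the version it declares.
grep -rH "VERSION" /tmp/demo-modrepo/lib
```

This only sees what the .pm files themselves declare, so it can't recover which distribution tarball a file came from; that is exactly the gap a real cataloging tool would need to close.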

I doubt we'll get to the point of automating the step of, when we have new version(s) of module(s) to add, "for each supported platform, go to an appropriate 'build host', check out the appropriate subdirectory, install the list of new module versions, test, check in".

And I suspect that step (2) (as rarely as it happens) can require going to BackPAN to get the same version of the module as we're using on all of the existing platforms. We do that because upgrading a module version is much more likely to cause problems, IME, than upgrading Perl versions or switching platforms (now that I don't have to deal with AIX, HP-UX, nor Solaris). And because we don't bother to archive the distribution tarballs that we downloaded (which is probably a mistake -- that's something that any tool attempting to address this problem space should be capable of doing automatically).

The Jenkins box has a minicpan that is only ever updated manually when necessary (when critical fixes arrive on CPAN or GitHub, or when we inject our own internal modules).
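For reference, a CPAN::Mini mirror like that is driven by a small rc file. The mirror URL and local path below are just examples, and the file is written to a demo location rather than a real home directory:

```shell
# Sample ~/.minicpanrc contents (demo path used here instead of $HOME).
cat > /tmp/demo-minicpanrc <<'EOF'
local:  /var/minicpan
remote: http://www.cpan.org/
EOF

# Refreshing the mirror is then a manual, deliberate step:
#   minicpan            (reads ~/.minicpanrc and syncs the local mirror)
cat /tmp/demo-minicpanrc
```

Keeping the sync manual is what makes this setup work for a CI box: the module set only changes when someone decides it should.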

Deployment for (pre-)production is done entirely through Debian packages, so we rely on prebuilt packages from the official repos, and repackage missing dependencies/newer versions.

On my dev box I have a perlbrew install, and I pull directly from public CPAN. We used to run on Etch, so I often had to downgrade my dev environment, but now we've finally switched to something more recent so we usually repackage newer versions instead of downgrading the dev environment.

The previous $client had a strict no-CPAN policy with the usual strange reasoning that "we can't trust those modules, we haven't written them", so basically we only had the core modules (from 5.8.something, no less!) + an old tarball of DBI + DateTime floating around on someone's hard drive, and no one could tell me how those got vetted (not that I was going to complain).

In my department we are transitioning to maintaining our own installation of Perl and its bevy of modules.

In the hopes of making this process simpler, we're making use of perlbrew. It feels quite promising so far. Our previous module setup was complex and has gotten messy since the person who maintained it left.

To install modules into my test environment, I've been using cpanm to install from public CPAN, but am using the older cpan shell in cases where I have to apply patches using distroprefs (which I just learned how to do).
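A distroprefs file is a small YAML document that the cpan shell (CPAN.pm) matches against distributions before building. The author ID, distribution name, and patch path below are all invented for illustration:

```shell
# Demo prefs directory (the real one is usually ~/.cpan/prefs).
mkdir -p /tmp/demo-cpan/prefs
cat > /tmp/demo-cpan/prefs/Some-Dist.yml <<'EOF'
---
comment: "apply our local patch before building"
match:
  distribution: "^AUTHORID/Some-Dist-1\\.0\\.tar\\.gz$"
patches:
  - "/site/patches/Some-Dist-1.0-localfix.patch"
EOF

cat /tmp/demo-cpan/prefs/Some-Dist.yml
```

When the distribution regex matches, CPAN.pm applies the listed patches automatically at install time, which is why the older cpan shell is still handy for this even if cpanm covers everything else.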

We have a couple of environments that require a very limited set of modules, and we're looking at using Pinto for those.

Our transition is far from complete: one of the things we're trying to determine is the best place to put our in-house modules. My first impulse was just to hand-create a subdirectory under site-perl and simply plunk the modules there, but I'm now thinking I should query wiser, more experienced heads before doing so.
