How Would You Engineer A PEAR2/Pyrus Distribution Architecture?


I was recently accused on the Zend Framework Contributors mailing list of having “strong feelings” towards Pyrus (i.e. the PEAR Group’s Installer/Packager for PEAR2), and not in a positive way. It’s a fair description. PEAR is, to put it lightly, a very old architecture, which makes it very resistant to change. With the idea of PEAR2 and Pyrus, I had hoped to see a renewal – the advancement of a PEAR architecture for the 21st century. Instead, and this is just my opinion, PEAR2/Pyrus are a relatively simple iteration on a very old theme.

A Ranting We Shall Go

Now, I may be biased, since I gave up on PEAR becoming PHP’s core distribution mechanism after I found myself using alternative strategies for hosting and deployment. This is not to say PEAR is not useful. It is – just not in my specific case when developing/testing/deploying applications. It remains a good distribution mechanism regardless, by virtue of being ubiquitously installed with PHP.

I surprised even myself, however, with my vehement outcry over the idea of adopting Pyrus as Zend Framework 2’s package distribution method, lambasting both it and the PEAR concept of distribution in equal measure while piling up questions about Pyrus’ status (currently released as an alpha) and its suitability in the near term. That thread showed fairly divided sentiment. Once I jokingly threatened to mow down my zombified colleagues with a minigun, I figured it was time to go forth and rant (miniguns are too expensive for these recessionary times).

If the PEAR ecosystem has a failing, it is one of staggered evolution. Over time it has picked up additional features tacked on top of a base model. The classic example is Channels (which support multiple repositories), whose cost has more recently prompted calls for a Channel Aggregator to avoid locally managing a channel registry, or even hosting a Channel at all. This is the way of many PEAR features: each does something incredibly useful, but does it in a way that has many developers looking for a better approach – usually only to discover that the better approach requires breaking compatibility.

My vehemence on the aforementioned mailing list was down to a simple case of disappointment. We all deal with PEAR because we have it, we know it, and have done so for years. Seeing PEAR2 and Pyrus take the incremental improvement route without apparently doing anything to change the core experience seemed… pointless. It improved a lot of what PEAR already did without actually doing very much differently. All the same advantages, disadvantages, features and gaps were present and accounted for, with a handful of nice headline changes (e.g. we now have package signing capability). What exactly was the purpose of rewriting the entire toolchain, if not to seize the opportunity to answer the accusations of those who doubt PEAR is even relevant these days – by making it the single most relevant development in PHP today?

One Possible Path Forward

Since this is a brain dump post, as much to gather my own thoughts in one place as anything else, feel free to call me batshit crazy. There are days even I think that. Below I’ve raised what I perceive as problems in the PEAR/Pyrus system, obviously from a personal perspective, and possible solutions under the categories of Packaging, Distribution, Installation and Usage. I’ve tried to avoid getting into technical details – broad strokes will suffice for now. For your sanity, only the Packaging and Distribution areas are presented today. I will add a similar post for Installation and Usage later in the week. First one to mention “TL;DR” gets a minigun round to the head (I will have to make do with throwing it at you until I can scrape together more cash for the hardware). To avoid any confusion, I use the terms PEAR and Pyrus to refer to the entire workflow from package generation to end usage for each respectively.

Packaging

The packaging of source code for PEAR is performed using the PEAR/Pyrus Installer coupled with a Package Definition (i.e. package.xml) to create a distributable archive file. Pyrus utilises a slightly friendlier Package Definition by also allowing some elements of the definition to be defined in files other than package.xml (e.g. for setting up a changelog file or version numbers). The basic goal of this Package Definition is to have at least one XML file which tells PEAR/Pyrus which files to package, what role each file has (code/docs/tests), where each file goes in the relative filesystem layout, optionally each file’s MD5 hash, and a set of metadata like the package name, changelog, version, dependencies, etc. Using Pyrus offers the additional features of being able to cryptographically sign packages, use a larger number of archive formats including PHAR, and bundle certain package dependencies internally.

Problems:

The main problem with the current Package Definition is that it often must be generated by a separate tool since it’s XML (the thing everyone used before discovering YAML/JSON), and must explicitly list every file and piece of data in that format (with the exception of Pyrus, which allows specifically formatted files to carry version and changelog information, among other nuggets), optionally with each file’s digest hash. Even the Pyrus improvements still require specific files using specifically formatted text and/or file names. Using XML just ends up imposing extra work to maintain package details, unless you are lucky enough to have a package.xml small and stable enough that it can be maintained manually rather than persistently needing generation. A minor aesthetic detail is that XML is harder to read.

Secondly, packages are therefore bound to their archiving constraints. Since package.xml generation is tied to a secondary process, installing from source code may not be feasible, whether performed on a local git clone or automated from a remote source where the remote package.xml may well be out of sync with the actual source code, or where it may not even exist.

Possible Solutions:

The one solution that keeps occurring to me is to simply make the Package Definition programmable, i.e. a small, consumable, low-maintenance PHP script. Using native PHP, one can create either a generic array or a newfangled closure which can be executed through PHP to populate all the necessary data of a Package Definition for consumption by a package installer.

Since I’ve dabbled a bit, here’s what such a Package Definition could look like:
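A minimal sketch, assuming a closure that returns a plain array of metadata – every key name, path, and the closure convention itself are illustrative assumptions on my part, not an established Pyrus format:

```php
<?php
// A hypothetical programmable Package Definition: plain PHP in place of
// package.xml. All key names and paths below are illustrative only.
$definition = function () {
    return array(
        'name'    => 'MyLib',
        'version' => '1.0.2',
        'license' => 'BSD',
        'dependencies' => array(
            'php'          => '>=5.3.0',
            'SomeOtherLib' => '>=0.9',
        ),
        // Roles map to relative paths; globs spare us listing every file
        // by hand, which is the chief maintenance burden of package.xml.
        'files' => array(
            'code'  => glob('library/*.php'),
            'tests' => glob('tests/*.php'),
            'docs'  => array('README.md'),
        ),
    );
};

// An installer would simply invoke the closure to obtain the metadata:
$package = $definition();
```

Because it is executable, the file list is computed at install time, so a fresh git clone needs no regeneration step before it can be consumed.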

I’ll assume PHP 5.4 will have some sort of short array notation to cut down the array size. Well, let’s hope so. It would be nice to reduce the line count further. Yes, I did indeed borrow the idea from elsewhere.

This has a few advantages. No XML to maintain. No need to keep an XML Package Definition synced up for every file change in a VCS. No need for secondary XML generation tools or build tool plugins. It supports downloading files from remote hierarchical sources and not just archives (including any VCS source). Developers are already used to versioning build scripts from tools like Phing (just not the end products, which are usually ignored, whereas package.xml is not). Being plain old PHP, it can be just as complex or as minimal as you want, and anyone with basic PHP knowledge can write one.

One can still generate signable archive files using this approach – the point is to increase the kinds of installation sources that can be used rather than replace existing ones. In place of signable packages, for those requiring the security, package files could be restricted to download over HTTPS. For example, Github offers read-only git access via HTTPS for all repositories as standard.
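As a sketch of that restriction, an installer could refuse non-HTTPS sources outright and enable certificate verification when fetching. The function name and URL here are hypothetical; note that PHP before 5.6 did not verify peers by default, so setting it explicitly matters:

```php
<?php
// A weaker substitute for package signing: refuse plain-HTTP sources and
// verify the server certificate when fetching an unsigned package file.
// The function name and URL are hypothetical.
function fetchPackage($url)
{
    if (strpos($url, 'https://') !== 0) {
        return false; // only encrypted, authenticated sources allowed
    }
    $context = stream_context_create(array(
        'ssl' => array('verify_peer' => true), // reject bad certificates
    ));
    return file_get_contents($url, false, $context);
}

// A plain-HTTP source is rejected before any network traffic happens:
var_dump(fetchPackage('http://example.com/MyLib-1.0.2.tgz')); // bool(false)
```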

Distribution

In order to distribute source code using PEAR/Pyrus, you need to make use of either a PEAR Channel or a standalone archive download (i.e. a downloadable tarball). A Channel is basically a whole bunch of XML files served up for access as a REST API. Using a Channel, you can upload packages to the Channel host, update the XML files, and publicise your Channel URI so users can discover your Channel and install your packages.

Problems:

PEAR (PHP Extension and Application Repository) was originally founded to serve as a central package distribution channel. For various real and imagined reasons, the concept of a central repository did not succeed in PHP, and developers instead insisted on using alternative means. This was aggravated even further by the arrival of frameworks like Zend Framework offering discrete components not originally served over a PEAR Channel at all. PEAR Channels were introduced to allow anyone to host their own distinct PEAR Channel as one of those means.

The PEAR Installer has only ever shipped with the main PEAR Channels pre-registered. All other Channels need to be manually located before use – usually by referring to the packaged library’s documentation. Since all Channels are independent entities, there is no global lookup point for querying package details, dependencies and availability. There is also no scope for true package name uniqueness (technically this is accomplished by requiring that all packages, except core PEAR ones, be prefixed with a Channel alias, e.g. mychannel/MyPackage).

The generation of the REST API, which was the backbone of a Channel, was also complex (the release of Pirum by Fabien Potencier has gone a long way towards simplifying this). Obviously, Channels are also tied to the concept of archive packages and cannot operate directly with a VCS like git. There is a workaround possible for Github using Github Pages to host the REST API.

As alluded to, the REST API is itself a complex graph of XML files that requires a generation tool to manage initial setup and package updates.

Possible Solutions:

The best concept to gain early traction was that of a Channel Aggregator, expressed by Stuart Herbert. Sadly, I haven’t seen much more action on that front. In commenting on that idea, I considered it a move towards a decentralised distributed Channel mechanism (mouthful of gibberish, I know!). Here are a couple of thoughts on how this could work:

The players would include a Package Authority, a Channel Aggregator (any number of them), and Channels (optional).

The Package Authority would be a centralised location, basically for reserving package names, ensuring there is a point of reference and authority to prevent package name duplication, and managing ownership of those names. It’s possible this could also be developed with additional purposes, but let’s keep it simple. This would help, primarily, in removing the need for Channel prefixes on package names and preventing package name confusion. For security reasons, the Package Authority would associate a package name with a specific URI representing a download source (e.g. a PEAR Channel or git URI).
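At its simplest, the Package Authority’s core record could be sketched as an authoritative map from reserved names to canonical source URIs – all names and URIs below are hypothetical:

```php
<?php
// Sketch: the Package Authority as an authoritative map from reserved
// package names to their canonical download source (Channel or VCS URI).
// Names and URIs are hypothetical.
$authority = array(
    'MyLib'    => 'https://pear.example.com/channel',
    'OtherLib' => 'git://github.com/someuser/otherlib.git',
);

// Resolving a name either yields the registered source or refuses:
function resolvePackage(array $authority, $name)
{
    return isset($authority[$name]) ? $authority[$name] : null;
}
```

An installer consulting such a map gets name uniqueness for free, since only one source can ever be registered per name.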

Channel Aggregators are the more complex beasts. They may be utilised by Channel operators to distribute Package metadata to end-users on demand. The Aggregator would track available packages at source, their basic details, their available versions, and information on the location of host Channels, version control systems, and Package URIs and so forth. In effect, the Aggregator might well replace Channels for many purposes – and potentially eliminate one more source of work in distributing source code using PEAR/Pyrus.

The ideal scenario here is that any PEAR/Pyrus Installer would pre-register a couple of well-maintained Aggregators, saving users and package distributors the annoyance of dealing with Channels altogether. Hence, we’re back to a core Channel of sorts, but with control of package/source hosting decentralised to individual developers. Again, Aggregators could easily repurpose themselves as package hosts if they wish (as Pearfarm is doing), though this would be entirely optional.

Channels, as suggested, could well be optional. Use an Aggregator instead and register either a package URI, a git repository, or anything else, so long as it lets you download the package files (and the PHP programmable Package Definition). Painless hosting? Maybe.

I will point out this would require at least one point of authentication in the system. You’d need a Package Authority account to allow for reserving a package name and perhaps transferring it between maintainers. The Aggregator may operate without authentication, since it acts much like any aggregator based on your source data (and one would hope a few simple crosschecks with the Package Authority ensure it’s not unwittingly aggregating false data from hackers). Package/source hosts could ping the Aggregator as a hint to update its data in a more timely manner.

I won’t touch the issue of who gets to reserve the package name “DB”. The Package Authority may need to enforce specific rules against overly generic names on a common sense basis.

I think that’s enough for a Monday read (you’ll all need enough brain capacity to finish out the week!). Feedback is, as usual, welcome. If anyone has a pre-existing solution or one in planning along these or similar lines, drop a comment!

After skimming through the post (sorry, but Mondays are busier than Fridays ;)), I’ll try to comment without saying stupid things:

That’s more or less the way composer (which you can find on github https://github.com/composer) works, or at least plans to work. It’s still not finished, but the goal there is that composer is the installer and packagist is (one) package repository. People can register their packages with the repository if they want, and that is where maintainer management/authorization would be handled. If a specific project, like, say, Drupal, wanted to use composer to install drupal plugins, they could set up their own instance of packagist and embed composer in their deployment, as a configured version including the drupal repository by default. That way users can easily install packages, and there are no risks of hijacking. However, you still have the possibility to specify that this one git repository should be used for package X, which can even be a local repository.

This is definitely more flexible, and I hope we can bring it to full speed soon enough. Feel free to join and help (but you know that already:)).

http://blog.astrumfutura.com Pádraic Brady

Ideally, this is why I’m posting – seeing what is going on out there, how people are working on solutions and building new code for those solutions. Hopefully adoption will follow in time.

Anonymous

Well, a couple things.

First off, I like a lot of what you mentioned here. Luckily, pyrus is just alpha. I’m not sure, but why don’t you get involved to steer the project in another direction, or at least make a difference? Pyrus lacks contributors before anything else. I think Brett & co would be really happy if people stepped up and got into it. I think changes can be made and done.
Then, I don’t think you realize how easy package.xml generation is with pyrus. A lot of people hate XML with a passion. Be that as it may, Pyrus really doesn’t force you to write any XML.
Also, please fork: https://github.com/pear2/PEAR2_Pyrus :-) I’ll have to re-read your post later. Mondays, etc.

Brett Bieber

Just a few notes about Pyrus, the toolset supports a programmatic API to the package.xml. The manifest can be created and updated automatically — no one should be generating the package manifest file by hand, unless they enjoy pain.

So theoretically, would it be possible for someone to use Pyrus to install a package cloned from git which only offered the programmed package script?

As a use-case, let’s take git as an ecosystem (and stretch it a wee bit). Projects may offer git tags as a reference for recommended installs, however some code is persistently in development and may be infrequently tagged but updated often. It’s unlikely the author wants to generate a new package.xml for each changeset they push to git in such circumstances even though the user base may be either brave or foolish enough (if not simply told) to rely on the master branch.

That’s the objective I’m pressing in the post above – whether the XML bit is skipped altogether, or generated on the fly by the installer for the purposes of installation (indirect, but let’s assume one wanted to keep the installer changes minimal), bypassing the actual need to keep an updated package.xml on tap at all times can simplify the usage.

http://blog.astrumfutura.com Pádraic Brady

As I said, XML being ugly is a minor aesthetic detail and not really my point. See my comment to Brett above for the direction I’m looking in. The way I see it, if we’re going to assume some form of package.xml generation, then why not make that generation script the foundation of the package? Sticking it off to one side in an optional toolset makes it easy to overlook, when it could neatly slot into a more central role, remove a degree of work, and make distributing non-packaged source code that bit easier and more convenient.

Put another way, one of the obscure rules I always like to follow with a VCS is to commit my build script but ignore the generated results of that script – let the user/package process worry about the end products. Using a package.xml turns that on its head requiring the product of a build process to be updated if you want any one VCS commit to be an accurate snapshot.

Brett Bieber

I think that’s a great idea. Right now it’s two commands, pyrus make && pyrus install, but it shouldn’t take much to combine the two.

http://xlogon.net/milki mario

PEAR also suffered from a lack of reverse engineering. The channel server interface was never documented, hence no alternative implementations. And PEAR2 doesn’t seem to bother there either (the whole website seems optimized to be uninformative).

Can’t say I blame anyone for that – having comprehensive documentation for the purposes of implementation assumes anyone would want to implement it. That said, I picked a Closure as a package definition because it could remain free of any class reference (probably the ONLY reason for picking it, actually). I even started writing a small specification on how to interpret/use it – maybe I’ll release a draft for the sake of completeness (it is extremely rough, though). Will see about putting it up tomorrow as a reference. I put it in the standard RFC XML format, so it should be easy to generate a publishable HTML version.

http://weirdan.livejournal.com Bruce Weirdan

The problem I see with PEAR is that it’s largely superfluous – why would I need PEAR when I already got apt + deb?

Assuming there’s actually a need for yet another package manager… The problem I see with your programmatic approach is that I, for one, would like the manifest to be passive – that is, I want to be able to unpack the package, verify that it contains the files the packager intended it to contain, and verify their integrity *without* running any code you supplied. I want to process it with *my* tools, not yours. package.xml, being a tiny, limited, declarative (yet ugly) language, fits this purpose just fine. Run untrusted code just to read a package description? Crazy idea.

For you, the solution might be to put package.xml into .gitignore and add a script to generate it (makefile is ok). Those who trust you would run it, those who don’t – will rely on those who do (or will run it in locked down environment and then process (after review) passive manifest).

Why do you want to misuse VCS as distribution system anyway? It’s hardly suitable for this (especially git since by default it fetches the entire project history). Put a torrent out there, it will save you some bandwidth. By the way, apt has torrent transport – which gets us back to PEAR’s unnecessity…

Tarjei Huse

Hmm, I think the main thing missing at the moment from PEAR is the idea of separate environments. I.e. it is very hard to keep multiple installations of PEAR in separate directories – for example for testing upgrades, or for checking into a VCS so that you present a stable environment for developers.

I hope Pyrus will be better in this regard – it seems headed in the correct direction.

http://blog.astrumfutura.com Pádraic Brady

It’s a larger topic – which is why I stuck to the broad strokes for each idea (each could go into a whole series of blog posts if you wanted to). So a few fleshed out points:

1. If the programmable package definition is untrusted code, then why the heck are you installing the package at all? It probably will contain dozens of other PHP files which might be out to get you. Let’s be fair in our treatment of files from the same package and source!

2. The manifest is the product of a build process, and you can have a whole argument over why versioning generated components is a good/bad idea. The main point I make is that for any one commit to a VCS to be inherently complete, such a manifest would also need to be updated – never going to happen if it is a generated component. Obviously, opinions here will vary across PHP developers.

3. The current package.xml doesn’t guarantee file integrity. The file md5sum attribute in the package.xml syntax is optional. Even the package.xml itself is not subject to any integrity check, so the file listing may be suspect regardless. Pyrus alleviates this through package signing, so you can at least have some assurance as to the source and integrity of the package. If using git, git obviously knows what files exist, and it can be accessed over HTTPS (offered by Github, for example), so the source is verifiable.

4. On misusing a VCS as a distribution tool. Welcome to Gitland. I’d wager that more code is distributed over git than is distributed over all the PEAR channels on the planet. No manifest or package definition. No signing of code. No archives. Nada. Using a programmable package definition that can be cloned from git alongside the code simply allows those using this approach to skip over the package grabbing stage currently necessary to Pyrus while still being capable of installation through Pyrus (with any advantages that offers in terms of dependency management and such).

http://weirdan.livejournal.com Bruce Weirdan

> If the programmable package definition is untrusted code, then why the heck are you installing the package at all?
Who said I was going to install it? It might be that I’m just generating a list of packages for others (or myself) to view.

> The main point I make is that for any one commit to a VCS to be inherently complete [...]
This is the point where we seem to have radically different views. I consider commits to be inherently incomplete – if a package hasn’t been built, it’s not ready to be installed. It’s code, not software. Which gets us to the next point…

> I’d wager that more code is distributed over git than is distributed over all the PEAR channels on the planet.
Code is irrelevant. The amount of code does not matter in the slightest. What matters is usable software. And here, I bet, git currently has no chance of beating even apt (let alone apt and yum combined).

Most PHP software is distributed in tarballs though. Then unpacked, configured and ftp’d to a cheap shared hosting. Never updated after that. This is what needs to be changed.

http://blog.astrumfutura.com Pádraic Brady

Okay, so validate the package definition. Tokenise it, scan function/method occurrences against a whitelist, validate any paths, and run like hell if the validation fails. The base format wouldn’t need a ton of PHP. The main ones are relative paths to the files to install and occurrences of file_get_contents(). Or would that lose me my batshit crazy label?
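A rough sketch of that validation idea using PHP’s tokenizer – the whitelist contents and the “T_STRING followed by an opening parenthesis” call heuristic are my own illustrative assumptions:

```php
<?php
// Sketch of vetting a programmable Package Definition before execution:
// tokenise the source and reject any function call outside a small
// whitelist. A real implementation would also validate paths.
function validateDefinition($source)
{
    $whitelist = array('glob', 'file_get_contents');
    $tokens = token_get_all($source);
    foreach ($tokens as $i => $token) {
        if (!is_array($token) || $token[0] !== T_STRING) {
            continue;
        }
        // A T_STRING immediately followed by "(" looks like a call.
        $next = isset($tokens[$i + 1]) ? $tokens[$i + 1] : null;
        if ($next === '(' && !in_array(strtolower($token[1]), $whitelist)) {
            return false; // unknown call: run like hell
        }
    }
    return true;
}

var_dump(validateDefinition('<?php $files = glob("src/*.php");')); // bool(true)
var_dump(validateDefinition('<?php exec("rm -rf /");'));           // bool(false)
```

This is deliberately naive (whitespace between a function name and its parenthesis would slip past the heuristic), but it shows the tokenise-and-whitelist route needn’t be a huge amount of code.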

It’s not radically different. The build process I’m speaking of, in the context of PEAR packages, relates to libraries needing a generated package.xml. It’s not unusual for me to throw in a bunch of commits and then stop working on a library for a few weeks if busy – in those cases, there are still people downloading and using it live across PEAR. More fun will be had with framework bundles/modules now that Symfony 2 has set them loose.

I won’t take THAT bet. If you were, however, to survey how developers deploy non-core libraries (i.e. excluding the PHP stuff they get through yum/apt when setting up servers), I think you’d be surprised how much is simply exported or cloned from a remote or locally mirrored VCS straight onto the filesystem.

Good luck with changing shared host practices. Pyrus does make this easier though, I believe, with better support for multiple local repositories.

http://weirdan.livejournal.com Bruce Weirdan

> Okay, so validate the package definition. Tokenise it, scan function/method occurances against a whitelist, validate any paths, and run like hell if the validation fails .
If we follow that route we would end up with a safe subset language, much like JSON is a subset of Javascript. Then interpreters would need to be written, leaving eval(join('', file($filename))) for brave souls. I’m not entirely opposed to that, however I don’t think PHP is a good base language for it. You could take YAML and extend it instead. Moreover, with some extensions (like accepting file patterns instead of fixed paths) package.xml might suffice as well.

> It’s not unusual for me to throw in a bunch of commits and stop working on a library for a few weeks if busy – in those cases, there are still people downloading and using it live across PEAR.
Then I would assume the library to be in a state of flux and wouldn’t use it until you package it. Or I will package it myself, taking responsibility for the results. From my POV (when I have to wear the sysadmin’s hat), the packager assumes responsibility for the package being in a working state. I do not expect you to keep trunk or the master branch usable at all times – this is why software projects have releases. And packaging is usually minor overhead compared to the release itself.

> If you were, however, to survey how developers deploy non core libraries (i.e. the PHP stuff they get through yum/apt setting up servers), I’d think you’d be surprised how much is simply exported or cloned from a remote or locally reflected VCS straight onto the filesystem.
More people (not all, for those systems have the same problem with separate repositories PEAR used to have) would use system packaging tools if more vendors provided packages in a usable format. Why don’t you package your libraries as .deb’s? I suspect download + extract + tinker + upload would still win that race. Some could commit the code to their own VCS (if they have one, that is). Take, for example, svn users – do you think most of them are aware of svn:externals? Or the proper procedures for tracking a vendor branch? Unlikely.

http://www.facebook.com/stevan.goode Stevan ‘Steviepops’ Goode

I like some of the ideas thrown out there in the comments and the main post. I’m not a great follower of what’s happening in the world of PEAR, so that may help or hinder my view. Here are my thoughts on a system so far.

We like the idea of multiple, distributed code repositories but also we like to have a standard, central repository that holds the ‘most common’ packages.

- Could we not borrow ideas from the existing, working package managers out there, like APT? Have a central, controlled repository that ships with the system, an ‘unstable’ repository that’s commented out in the initial config, and allow people to add their own repositories. The repositories could maintain a list of packages and versions loaded in from the packages’ metadata files and send them off to the client on request.

We also like to be able to install in specific locations if we want to, or just install system wide if it makes more sense to do that.

- When installing have a default ‘system’ directory as well as allowing the user to override this with a command line argument (e.g. -d without an argument to install in the current directory, -D with an argument to install in a specific directory, neither to install in the config specified directory).

We want to have a way to add meta-data to a project but in code that doesn’t execute arbitrarily and is easy to read/edit.

- Use an INI file format? It’s a file format that’s been around for a while, but it does what we need, and PHP already has the right functions for parsing it. Have a script that helps devs create the INI file, to make their life a little bit easier.
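That INI suggestion in practice might look like the following – a passive, declarative manifest parsed with PHP’s built-in INI functions, no code execution needed. The section and key names here are hypothetical:

```php
<?php
// A hypothetical passive manifest in INI form, parsed with PHP's
// built-in parser. Section and key names are illustrative only.
$ini = <<<'INI'
[package]
name = MyLib
version = 1.0.2
license = BSD

[dependencies]
php = ">=5.3.0"
SomeOtherLib = ">=0.9"
INI;

$meta = parse_ini_string($ini, true); // true = preserve sections
echo $meta['package']['name'], ' ', $meta['package']['version'], "\n";
```

In a real tool the manifest would live in its own file and be read with parse_ini_file(), but the parsing behaviour is identical.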

In the case of ‘uncontrolled’ repositories, devs could put in the code and have the repository meta-data refresh automatically based on cron, hooks, etc or simply update it in their meta-data file that they submit to the VCS.

Using VCS as the primary distribution method is dangerous. Although in theory a dev should only commit code once it is working and the task they set out to perform is complete, in the real world a large number of people submit half-baked code riddled with issues before testing.

http://twitter.com/CloCkWeRX Daniel O’Connor

I’ve seen multiple discussions which talk about wanting more channels pre-registered. I’ve stuck in https://github.com/pear2/PEAR2_Pyrus/issues/33 for now – patches more than welcome

Brett Bieber

Pyrus supports multiple installations out-of-the-box. Using a local ./vendor/ directory for application dependencies is actually the “recommended” way to use pyrus.

I like the idea of providing a command to automatically register all known channels. I do NOT think it should be automatic, because no verification is ever done of the code these channels are distributing to ensure it is not malicious, and there is no good reason to tacitly endorse code we’ve never looked at.

I also like the idea of a one-step command to install from a checkout, but I think a better way is to put this in your development toolchain:

There are several factual errors in your post. I might add that you know I wrote this stuff, why didn’t you email first to get comments/fact-check? Forgive me if I am pissed, but your amateur journalism is personally offensive as well as indefensible from an objective standpoint. Please fix these points ASAP:

Pirum was not the first standalone channel generator; Pyrus’s built-in developer commands were. They simply lack the PR finesse that Fabien is so brilliant at, and the frontend is not bundled but is a separate package.

Channels CAN operate in a VCS. pear2.php.net is in fact 100% VCS-based (subversion, but it’s the same idea as git).

“What exactly was the purpose of rewriting the entire toolchain if not to seize the opportunity to answer the accusations of those who doubt PEAR is even relevant these days – by making it the single most relevant development in PHP today?”

One purpose was correctness, which XML gives us cheaply. A hand-written PHP definition like array('oops' => 'we', 'oops' => 'used it twice') simply ignores the first 'oops', which can cause an extraordinarily subtle logic bug. The PEAR installer had thousands of lines of code to validate package.xml, and still more than 20% of existing PECL packages have horrendous bugs in their package.xml that slipped by without notice (Pyrus uses XSD and cuts out thousands of LOC and is much faster). XML is a great standard for correctness.

For ease of development and human readability, ini or yaml wins, hands down. But who wants to hand-edit a package definition file anyway??? I haven’t edited one for years. I wrote the first package management tool, PEAR_PackageFileManager, to make this never necessary again, and pyrus makes even writing PHP code unnecessary for all but the most complex packages, by editing the text files that one should have anyway.

I think that Pyrus could compare the timestamp of a package generation script with the package.xml and ask a user if he/she wanted to update it (and provide a diff of the files that would be added) on install. Alternatively, the package.xml could be updated to have a wildcard format so that installation from vcs would install whatever’s there, and the package.xml would be fleshed out when packaging. There is always an inherent risk that something will be flubbed (happens ALL the time with automated scripts) so eyeballing package.xml before distribution is always a good idea. I would want the new wildcard thing to be a new filename, perhaps in one of your fave formats.
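The staleness check described in that paragraph might boil down to something as small as this – the filenames are assumptions, and a real implementation would then prompt the user and show a diff:

```php
<?php
// Sketch: decide whether the package manifest is stale relative to its
// generation script by comparing modification times. Filenames are
// hypothetical.
function manifestIsStale($script, $manifest)
{
    if (!file_exists($manifest)) {
        return true; // no manifest at all: it certainly needs generating
    }
    return filemtime($script) > filemtime($manifest);
}
```

If manifestIsStale() returns true, the installer could offer to regenerate package.xml (and show the diff) before proceeding.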

As for the user who is interested in making .deb files: the Debian maintainers use PEAR to create these files, and the same is true of Red Hat and .rpm files for PEAR packages. It actually makes their job a lot easier.

As for not requiring a local registry of channels and trusting an external channel aggregator, the scenario you described has many security implications which you need to think about. EVERY package management program requires manually entering the URLs of package repositories. It is very dangerous to remove this.

Most of the package managers have a central repository that contains absolutely everything because they are operating system package managers. PEAR is not. We have channels (which I introduced in PEAR version 1.4 a LONG time ago) specifically to match the distributed nature of the PHP ecosystem.

The fact of the matter is that there are different levels of commitment. For casual developers, they are not going to need the versioning and verification that the PEAR installer provides because they are working alone. These users can just pull the source from git and put it live on the site. Hell, I do this for some of my teeny projects.

The next step up is a large system with many users editing code. In this scenario, we want versioning and verification so we can ensure the remote code is the code we are expecting and is from the author we are expecting. The same is true on the developer end. You are either writing for hobbyists, in which case you don't need PEAR, or you are writing for serious users, in which case you need SOMETHING to verify the source and author. I personally hate Zend Framework because the ideology of the framework is designed such that you either download the entire thing at once, or have to do a LOT of work by hand to manually install a single package and its dependencies. This is ridiculous, and borderline criminal. I still use ZF for the GCal stuff, but I hold my nose every time.

As for Pyrus itself, the reason the thing is in alpha is ONLY because we want to be able to add new features – ones that users who, like you, have been in a coma for the past 4 years might wake up one day and suggest – before we freeze the API. The code is fully tested AND documented, both in terms of usage and down to the smallest API.

Padraic, you are a serious developer, so I was honestly surprised at this post. You've known about Pyrus for years and had plenty of time to provide useful input, and your first input is a public, poorly researched rant? Come on, you can do better than that. As for me, I am no longer an active developer on Pyrus or anything open source because that energy is now going to raising my daughter, so I will leave the project to be what it is. But I hope that you and others will realize the true potential that is built into Pyrus and will provide the missing things that they need, either as plugins to Pyrus or by patching the source code directly. Pyrus is the most robust dependency management and installation system you can find. The things you are describing as major problems are primarily cosmetic and VERY easily modified.

For instance: you could easily write a package definition parser for your custom PHP-based thingy which would accept a PEAR2\Pyrus\PackageFile object, and your closure would work. That would be about 30 LOC. Writing a JSON parser that spits out a PEAR2\Pyrus\PackageFile\v2 is more complicated, but also not super difficult, probably on the order of 500 LOC. Modifying the command line to look for these things is about 3-10 LOC. Adding a command to parse the XBEL at pear.php.net of all known channels and register them is probably 20 LOC, plus another 15 LOC in the XML file that defines commands inside Pyrus, and it could be implemented as a plugin as well as in the core.

Ciao

http://twitter.com/gggeek Gaetano Giunta

From my limited experience in trying to package a largish CMS app using PEAR (we still support older versions of PHP, so going Pyrus-only is not yet feasible), the problems are:

- documentation of the package format is oriented heavily towards users creating packages with the PEAR tools. Since I create the package XML via my own build scripts, I just bypass any PEAR code at that step and build the package myself. The problem is that the docs are not enough for this. And discouraging alternative tooling implementations is not a good way for the format to increase adoption. Think about widely used formats such as e.g. zip: the format is well known and documented, and implementations abound. Documenting the tool instead of the format is a bad idea – and turning the XML package description into an executable is a bad idea too, because it goes in the same direction

- limited support for installing many copies of the same app in different directories (I’m told this is ok with pyrus though)

- documentation for extended tasks is very poor – and I need a lot of them, e.g. to trigger the setup wizard of the CMS after deploying it. In fact the CMS already has a packaging system built in, so all I want is a package-in-package possibility plus the ability to start the CMS package installation script at the end of the PEAR package unzipping

- no support for uninstallation tasks. How can I trigger removal of db structures when uninstalling extensions? IIRC upgrade tasks are also lacking

- limited support for more complex dependencies (e.g. I need oci or mysql or mysqli or postgres, and package-x or package-y, as prerequisites)

My 2c
Gaetano

http://blog.astrumfutura.com Pádraic Brady

I was definitely not expecting that comment. You win. I won’t criticise Pyrus in my next blog post.

I was being sarcastic. Sorry. Am I supposed to cry now and run away?

In the meantime, you’ll be happy to hear I’ve arrived at the conclusion that you really really don’t want anyone to open their mouths criticising PEAR. I find this attitude abhorrent and will respond in my own rarely expressed way in the next few days. I suggest purchasing a flame retardant suit because I sure as Hell couldn’t give a damn what I end up writing.

http://weirdan.livejournal.com Bruce Weirdan

> The fact is that most who hurl accusations at PEAR have no interest in making PEAR relevant at all, don’t know anything about the tool itself, and couldn’t care less.
Bad, bad users. How dare they?

> As for the user who is interested in making .deb files, the Debian maintainers use PEAR to create these files, and the same is true of redhat and .rpm files for PEAR packages. It actually makes their job a lot easier.
Sure they do when they are re-packaging PEAR stuff. I have a question though – for new software (that hasn’t been packaged for any distribution system yet) would one go with pear -> .deb packaging or would it be easier to go straight to debs skipping PEAR altogether? I’d think the latter is the case.

http://www.facebook.com/greg.beaver Gregory Beaver

My point about users is simply that PEAR caters to its users, not its haters. Padraic counts as a user, otherwise I wouldn’t bother responding at all.

As for packaging as a deb, it would make sense for simple packages that don’t use any special tasks or other PEAR customizations. However, for most packages, there are some complexities that prevent simple .deb packaging. In addition, any custom package installed using the PEAR installer would not detect the .deb-installed package unless debian also manipulated the registry somehow. This would duplicate functionality of PEAR, so ultimately I think the advantages are balanced by disadvantages.

http://www.facebook.com/greg.beaver Gregory Beaver

?? Perhaps I would have responded differently if you had ever expressed ANY criticism or ideas during the time that other users of PEAR (such as Derick Rethans, who provided a lot of the great new ideas in Pyrus) were answering the very public call for ideas. Instead, simply stating – as if it is a fact – that no regard was made for suggestions during development, and other blatant falsehoods, does nothing to advance either PEAR, PHP or your personal causes. It simply (as I said) pisses me off. I and others devoted thousands of hours of unpaid personal time to make Pyrus and PHP itself (the phar extension, for instance) respond to the needs of users. I did this explicitly because I like helping others make their lives easier in coding PHP (all of these projects are kind of meta-projects, since they all support using PHP for the web, rather than …).

Also, as I said, your ideas are good, but are hardly the kind of things that would require a rewrite of Pyrus, they would be relatively trivial additions to the existing architecture. To be explicit: I would expect a blog post with this kind of vehemence AFTER you have already expressed these concerns on the development mailing list that PEAR and Pyrus developers read. Once they have ignored you, by all means, blast away.

Of course, I do feel differently for any project that puts its code first. In other words, Pirum has every right to criticize PEAR’s channel server in its docs for being insufficient because it provides an alternative that is fully functional. This is the best way to criticize in open source:

1) provide patches
-or-
2) provide an alternative solution

I feel the same way about the new docblox auto-documentor, which provides a BEAUTIFULLY written alternative to phpDocumentor, even though it is not yet finished. It actually implements everything I had hoped would happen in a rewrite of phpDocumentor. In its docs, it criticizes the shortcomings of phpDocumentor, and has every right to do so.

An even better example is the PHK project, which vehemently criticised the phar extension, but provided an alternative. I happened to think the alternative was not the best way to solve the problem overall, but the criticisms of phar ultimately led to a much better phar extension in the end.

However, I feel strongly that text-only criticisms should be backed up with some common courtesies such as getting all criticisms stated as fact correct and contacting developers of the project you’re criticising first.

I welcome criticism of PEAR, why do you think I rewrote the damn thing? It was based on years of seeing the same problems crop up over and over again on the mailing lists and in bug reports. During development, several calls for suggestions and emails from people using PEAR were implemented, and the biggest criticisms (as judged by the frequency of their being voiced) were addressed:

All of the other concerns were voiced by one or two developers, or were simply ideas one of us had such as the ability to create a bundleable PEAR install that could go in a VCS without messing up the registry. In addition, the memory footprint and performance of Pyrus is significantly better (this was also a relatively common complaint with power users).

More pointedly: PECL users DID complain that package.xml was too difficult to handle, and so I wrote the “pickle” command, which automatically creates a package.xml at package time. This is very similar to what you're requesting for install-time from source, and as I said before, it would have been implemented had ANYONE, including you, suggested it with the clarity you have here during development. My only complaint is that you imply that it WAS suggested and ignored, which is patently false. It's not too late to implement anything in Pyrus, either, which is my other big point. If people only ever complain about it, there is no chance that things will be implemented. It would be nice if users from ZF and Symfony2 jumped over to take a serious look at what Pyrus can do, and poked at it and fixed it. I know the Symfony community has been looking seriously at channels and PEAR for years.

Also, as I said, I'm no longer an active developer, so my views are my own, and reflect years of patiently listening to valid criticisms of PEAR and finally addressing them in Pyrus, followed by years of silence and yawns from the PHP community. To suddenly hear the first public mention of Pyrus as “it sucks” is more than a little frustrating, since the truth is that it DOESN'T suck. It needs some loving to become what you want it to be, but it is the culmination of years of effort; it does some amazing things and is flexible enough to do a lot more. I trust you no longer believe the acidic comment you made about my abhorrent attitude. I think a better phrase to describe my attitude is “strongly opinionated”, as those working on the PSR-0 namespace standard learned the hard way.

P.S. this blog's comment system inexplicably added two closing “whatever” tags to my first comment and I can't seem to erase them. Not sure what is going on there, but that is not intended.

P.P.S. when I talk about catering to users and not haters, I like to think about frameworks. I hate pre-built frameworks in PHP because I can write my own models, controllers and views much faster than the time it takes to learn someone else's system. Thus, I don't have any criticisms of frameworks because I am a framework hater, not a framework user. There is nothing that ZF, Symfony, Cake, or any variant could do to make me change my mind on the issue, so they should not cater to me. I feel the same way about haters of PEAR who will simply never use it. There is no point in trying to bend PEAR into pretzels to make every user happy. Instead, it should do what is possible within what those who would use it are looking for, such as what you are looking for. Incidentally, I do use a framework in javascript, because it makes things much easier (standardizing browser differences, for example), so I'm not anti-pre-built-framework on principle, just in PHP for my own projects. Not that this matters.

http://blog.astrumfutura.com Pádraic Brady

After reading your comment, with some safety time in between to allow my blood pressure to come down, I think you need your own time off. You have made allegations without evidence, misinterpreted obviously correct statements, insulted me in various ways, both directly and by association with generically insulting statements, and engaged in the sort of FUDish arguments I can only heap disdain on.

You mention a claim about lacking documentation. Where did I say that? You complain that you can host stuff over SVN, yet I specifically mentioned git. You accuse me of calling Pirum the first channel generator when I said nothing of the sort – only that it vastly improved the situation. Is this the sort of jumbled response you wish to make from a position of authority? And that’s before the personal insults!

You imply that the system I noted for channels is insecure, and ignore the existence of parallel systems elsewhere using a single repository without code review, instead choosing a comparison to OS repos. That's plain old FUD – a carefully selected excuse where you use general statements and do not engage in the debate with specifics. The same bias shows elsewhere when you moan that it might result in “tacit endorsement” of malicious code – a stupendously worthless argument in the context of an agnostic protocol. Does Github endorse all the code it hosts? Of course it doesn't. Does HTTP? FTP?

Your comment is a marvellously flawed work of your imagination, ignoring both what I actually wrote and the limitations expected of part 1 of a 2-part series – and it is absolutely despicable. I stand by this article without hesitation. It contains no inaccuracies, no personally directed insults, and represents my personal opinion which, after consideration, remains unchanged. I consider your post to be a final fruit of what is whispered in corners of the PHP community – that PEAR is stagnating and that there are those in positions of authority openly hostile and critical of anyone who dissents from their view of how PEAR should work. Your comment beautifully demonstrates the truth of their words, and I almost regret ignoring their advice in pursuing this course of action. Congratulations on furthering the reputation of PEAR in the worst manner possible for an open source project.

Take your vitriolic lies elsewhere. I suggest your own blog where I will be happy to engage you from high orbit with kinetic missiles launched from this blog at any time you wish to attack my reputation in public. If you have obtained a saving grace it is that, as you are an inactive open source developer, I cannot publicly pin you to a wall as one of the more serious flaws in PEAR.

http://twitter.com/nihiliad David Naughton

> …no verification of the code these channels is distributing is ever done to ensure that it is not malicious, and there is no good reason to tacitly endorse code we’ve never looked at.

No other major-language package-management system that I’m aware of imposes this inspection-by-a-committee-of-gatekeepers entry barrier. Certainly doesn’t exist in Perl, Python, or Ruby. I suspect that’s one reason why there’s so much more code in any one of CPAN, PyPI, or RubyGems than in PEAR.

Quality control is nice to have, but must it be an entry barrier? Why not provide quality indicators after the fact? There are multiple ways to do it, many of which can be automated: test-coverage and doc-coverage metrics, coding activity levels, number of open issues, download statistics, user-ratings, etc. Of course it’s possible to game some of those, if they’re implemented naively. There’s lots of prior art in recommendation systems for mitigating that, though.
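The after-the-fact indicators David lists could be combined into a simple published score rather than a gate. A rough sketch in Python, with metric names and weights invented purely for illustration; a real system would calibrate them and guard against gaming:

```python
def quality_score(metrics):
    """Combine a few automatable indicators into a rough 0-100 score.

    Each metric is expected as a normalized fraction in [0, 1].
    The names and weights are hypothetical, not any real registry's."""
    weights = {
        "test_coverage": 0.4,    # fraction of code covered by tests
        "doc_coverage": 0.2,     # fraction of public API documented
        "recent_activity": 0.2,  # normalized commit/release activity
        "issue_health": 0.2,     # e.g. 1 - (open issues / total issues)
    }
    score = sum(weights[k] * metrics.get(k, 0.0) for k in weights)
    return round(score * 100)
```

Displayed next to each package, such a score informs users without ever blocking an author from publishing.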

Brett Bieber

>> …no verification of the code these channels is distributing is ever done to ensure that it is not malicious, and there is no good reason to tacitly endorse code we’ve never looked at.

> No other major-language package-management system that I’m aware of imposes this inspection-by-a-committee-of-gatekeepers entry barrier. Certainly doesn’t exist in Perl, Python, or Ruby. I suspect that’s one reason why there’s so much more code in any one of CPAN, PyPI, or RubyGems than in PEAR.

> Quality control is nice to have, but must it be an entry barrier?

What barriers are we referring to here? There are many PEAR packages that you can install outside of the original PEAR repository. PEAR channels have been popping up all over the place.

http://blog.astrumfutura.com Pádraic Brady

David was responding to Gregory’s assertion that an Aggregation system, if implemented, would mean the operator would be tacitly endorsing all code distributed over that system. That is, of course, FUD – there are many distribution mechanisms which don’t review all code and are not assumed to tacitly endorse anything. Similar to the FUD of comparisons to an OS level repository and implying there would be security problems if all channels were merged via an Aggregator.

These systems are used in Ruby, Python, and Perl without the world ending.

Brett Bieber

Yeah, I agree that the world wouldn’t end.

I know a lot of hosting providers that would certainly want some control over which channels were “supported” or authorized for installs. Security is a big concern, since installers such as PEAR are integrated into quite a few CPANEL-like interfaces. I think some mechanism of security needs to be taken into consideration to support that use-case.

http://blog.astrumfutura.com Pádraic Brady

Some leverage could be built in to implement a whitelist where necessary, for example denying the listing of an Aggregator and allowing only the core PEAR channel (the system I describe assumes a Channel or Aggregator can each be registered/removed as necessary). That would resolve their concerns, though what users may do on their locally writable filesystems may not please them.
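The whitelist idea could be a thin filter in front of channel/aggregator registration. A minimal sketch, again in Python for illustration with invented names:

```python
def allowed_to_register(uri, whitelist):
    """Decide whether a channel or aggregator URI may be registered.

    An empty whitelist means 'no restriction' (the default open mode);
    otherwise only listed hosts are accepted. A hosting-panel vendor
    could ship a locked-down whitelist containing e.g. only
    pear.php.net, while ordinary users keep the open default."""
    if not whitelist:
        return True
    return uri in whitelist
```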

http://www.facebook.com/greg.beaver Gregory Beaver

OK, let’s back up a bit here. I’ll try to keep this brief so there is no confusion at all.

2) “You complain that you can host stuff over SVN, yet I specifically mentioned git”
The actual words I said were: “Channels CAN operate in a VCS. pear2.php.net is in fact 100% VCS-based (subversion, but it’s the same idea as git).” which as you can see is quite clearly mentioning both SVN and git in the same breath. Unless I’m not understanding you, storing a channel’s information in git is no different than storing it in subversion, except you’ll have a hidden .git directory instead of a .svn one.

3) “You accuse me of calling Pirum the first channel generator when I said nothing of the sort – only that it vastly improved the situation”
I did misread your words here, my mistake, apologies for the incorrect accusation.

4) “You infer that the system I noted for channels was insecure and ignore the existence of parallel systems elsewhere using a single repository without code review instead choosing a comparison to OS repos.”
OK, then let’s look at non-OS package managers. I may be misinformed, but here is my understanding of how the big ones most similar to PEAR work:
gems -> rubygems.org hosts all gems
pip -> docs say “pip searches http://pypi.python.org/pypi by default but alternative indexes can be searched by using the --index flag”
npm -> by default uses http://registry.npmjs.org/ and is configurable
cpan -> installs from cpan, CPAN::Mini::Inject allows a local mirror of your own modules.

All of these package managers control the website that distributes the code. PEAR could set up a single clearing-house website (which is different from a channel aggregator), but that is basically what pear.php.net is. If one changed the rules for code submissions at pear.php.net, it would be no different from the other package managers.

So, let me try again: you suggested

a) “Channels, as suggested, could well be optional. Use an Aggregator instead and register either a package URI, git repository, or anything else so long as it lets you download the package files (and the PHP programmable Package Definition ). Painless hosting? Maybe.”
This would mean that users could type in:
pyrus install Log
expecting to get pear’s log package and instead get a completely different one because the aggregator (or package authority) decided to change where the Log package came from. This, as I suggested, has security implications that are dangerous. Why does this matter? An example: one of the major consumers and users of PEAR channels internally is the state of Iowa. They use it to manage websites such as the public sex offender’s registry. You can be sure they will not like the idea of hiding the source of a package for obvious reasons.

The best solution (which is also one you suggest) is providing a killer search feature, so that one could instead type

pyrus find Log

and quickly get a list of packages that should be considered for installation, sorted by relevance (like Google for PEAR). Ultimately, the problem is the ability to find packages, no? Sure, this is one extra step, but not a huge impediment the way things are now, where one is up a creek without a paddle if you don’t remember which channel a package comes from.

Let me point out a few more inaccuracies since you claim there are none:
5) “The PEAR Installer has only ever shipped with the main PEAR Channel pre-registered. All other Channels needed to be manually located before use”
The PEAR Installer ships with these channels pre-registered:
pear.php.net
pecl.php.net
doc.php.net
Pyrus ships with these channels pre-registered:
pear2.php.net
pear.php.net
pecl.php.net
doc.php.net

6) “The main problem with the current Package Definition is that it often must be generated by a separate tool since it’s XML (it’s that thing everyone used before discovering YAML/JSON), and must explicitly list every file and piece of data within that format (with the exception of Pyrus which allows specifically formatted files to carry version and changelog information among other nuggets) optionally with each file’s digest hash. Even the Pyrus improvements still require specific files using specific formatted text and/or file names.”
a) pyrus has 2 commands that automatically generate the directory structure needed for package.xml, generate-ext and generate-pear2. The make command will auto-update the list of files in package.xml. This is all contained within pyrus, not an external tool.
b) “Even the Pyrus improvements require specific files using specific formatted text and/or file names”
This is a rather harsh way of saying that Pyrus will automatically pull the contents of a README file to populate the description of a package, and will recognize RELEASE-X.Y.Z as meaning that the package is now version X.Y.Z and the file contains the release notes, and LICENSE as containing the license text! CREDITS does need to be formatted in a particular way, but this is a rarely edited file, and it’s not exactly complex, and is clearly documented. Let me reiterate: the files required to avoid ever touching package.xml are README, LICENSE, RELEASE-X.Y.Z where X.Y.Z is the version number, and CREDITS. HOWEVER *none* of these files are required. If you already have information in package.xml, it will instead use that information.

As for containing no personal insults, that is true, but I never said it did contain any. I said I found the quality of the fact-checking personally offensive, which is not a good choice of words. I should have simply said that I find the tone and the inaccuracies that imply pyrus is worse than it actually is very annoying personally. This is what “personally offensive” means in my lexicon.

7) “I consider your post to be a final fruit of what is whispered in corners of the PHP community – that PEAR is stagnating and that there are those in positions of authority openly hostile and critical of anyone who dissents from their view of how PEAR should work”
Sorry, but this kind of rhetoric is not necessary, not helpful, and completely out of proportion to anything I said, as well as in direct contradiction to my repeated assertions of LIKING your ideas but disagreeing with the tone and the inaccuracies in this post. I’m not sure what you’re hoping will happen with this aside from perhaps scoring points from those who like a good flame war, but I am looking for something a little better than this: an honest representation of BOTH the strengths and the weaknesses of Pyrus and of PEAR. Perhaps this is impossible in this context, and that would be very sad.

For what it is worth, I am sorry my own emotions have distorted some of my phrase choices. I hope you can look past that and acknowledge that I am not out to get you, only out to get the record set straight on work I’ve poured my heart and soul into for years. If that makes me biased, I can’t help it, but it doesn’t necessarily imply that I am wrong. Please take another look and see if we can’t find some common ground!

http://cweiske.de/ cweiske

I documented that channel server REST API years ago: http://pear.php.net/manual/en/core.rest.php

http://blog.astrumfutura.com Pádraic Brady

1. I was agreeing with Mario’s comment in that reverse engineering did not appear to be a goal of PEAR. I myself did not claim documentation did not exist (as you originally asserted) which would be silly since I’ve been reading that very same documentation for a while now. I can see that I left the comment very easy to misinterpret.

2. Why would I suggest storing anything in a VCS was not possible when that would be blatantly false not to mention being the whole point of a VCS? I was noting that a channel cannot operate from a VCS – and probably should have added “directly”. You don’t put a git URI into the browser and watch it magically serve up XML channel files – you need a web host in addition to the git repository. It’s extra. It may cost money. Free alternatives may be thin on the ground, slow, restrictive or unreliable. People use Github Pages and Google Code for a reason. If you removed that requirement and delegated channel hosting to an Aggregator built for that purpose specifically… One less thing to worry about. Commit, push, tag…done. Sound familiar?

4. I agree that the Aggregator could change the source for a package. Though the Authority would also have to do the exact same thing. Both need to agree on the package source as a built in check. Also, by the same reasoning, PEAR is currently insecure. PEAR could at any second swap any package for any random bunch of malicious code. Why should we believe you are secure? For all we know, half the PEAR Group (and its package developers) are members of Lulzsec and currently editing packages to stuff SQL Injection vulnerabilities into thousands of Sony servers. Bear that in mind when comparing to other languages where editing packages has little to no oversight by the hosting organisation.

The source of packages is NOT hidden. Both the Authority and Aggregator must agree on a known URI resource (e.g. to channel.xml’s URL or a git repository URI) which can be communicated back to the end user (since they would directly download from that source themselves), perhaps when installing a package where it is displayed and a confirmation offered on whether to proceed. Now we have three parties which need to agree that a source is correctly associated with a package name. All three are fully aware of the source and its URI. Its content is a whole other story .

5. Fine. The PEAR Installer has only ever shipped with the main PEAR Channels pre-registered. All other Channels need to be manually located before use. I’ve edited the post to reflect this correction.

6. a) Never referred to directory setup in the article.
b) It may be harsh but it is nonetheless factually true. There are specific files and they do need a specific format and they are required for what I saw as “improvements” over the last-gen installer.

7. Yet I find the rhetoric difficult to apologise for. I wish I could but it is simply the impression I am left with.

Software evolves. It’s a fact of any programmer’s life. To expect Pyrus, with roughly 4 years of development and no stable release, to remain as is with no criticism is unrealistic. To expect an architecture far older than that to remain uncriticised is also unrealistic when it is popularly criticised (I’m sure there’s a reason why). Yes, people may be motivated to speak/contribute at various times and for various reasons. We’re in the middle of a grand scale framework renewal and git adoption period, for example, where distribution is enough of a significant concern to warrant a good old to and fro bout of criticism and debate.

Some of the ideas raised will be nuts, some will be stupid, some will build on PEAR, some would tear it down, some will be easy to implement, some will be almost impossible to. And through it all, there will be the same thread of someone criticising PEAR for what it currently is without referencing all the great things it has accomplished. Will they all receive the same encouragement to decide PEAR is not worth the high blood pressure?

Do you think people are incapable of dedicating time to their perceived problems? There are projects building on git, new packaging formats, methods of configured installations and more besides. All out there – with real code. Anyone with access to Google can find them. People are going to scratch their itches if nobody else does first. Or they’ll do it simply for kicks the next time Pyrus dependency management is noted for the umpteenth time (well, fifth if we’re counting separate blogs) as being too difficult to reproduce or when it’s pronounced that PEAR must be part of any solution.

Those who bother about these issues can either work with PEAR or around PEAR. Please, stop getting in our way. We’re trying and the discouragement you guys seem unaware of propagating is killing us.

The purpose of this entire blog is simple. I write crazy things, I debate them and then I write source code or spend weeks doing something equally valuable. Those three steps are evident across the blog’s history. It works, well, more often than it fails at least. The Zend Framework 2 will probably end up adopting Pyrus despite the concerns of everyone who voiced a shortcoming or potential improvement feature. I had hoped to make that experience more compelling and rewarding for end users even if it meant debating every idea I had down to the conclusion that it was indeed bat shit crazy.

Anonymous

cool
