Phillip Smith: A blog about the Perl programming language (blogs.perl.org)

Installing DBD::mysql on Mac OS X 10.7 “Lion” (2012-03-08)
I’ve run into this problem every time I’ve set up a new computer recently, so — in the interest of remembering where to look for the solution next time — here’s a quick “note to future self” post on installing MySQL and DBD::mysql on Mac OS X 10.7 “Lion.”

First, the binary installers for MySQL and PostgreSQL for OS X have gotten so good that I don’t bother compiling either from source anymore. In fact, the new PostgreSQL installer comes with a nifty “Application Stack Builder” tool that does one-click installation of helpful PostgreSQL add-ons like PostGIS (previously, one of my least favourite things to install). The same goes for the MySQL binary installer: it includes MySQL itself, a preference pane for starting/stopping the server, and a startup item to ensure that it’s always running after a reboot.

Okay, so that’s great, but what about when I actually want to connect to MySQL from something like Perl? Well, to get started, I need to install the MySQL driver, DBD::mysql. Unfortunately, cpanm bails while building the module; the log shows it failing to link against the MySQL client library.

There are a few helpful posts out there on the Internets, but none of the suggestions resolved the issue for me.

In the end, there were two steps that were necessary to get things working:

Symlinking ‘libmysqlclient.XX.dylib’ from ‘/usr/local/mysql/lib’ to ‘/usr/lib/’ (where XX is the version of the library that is available to link to).

Passing arguments to the Makefile to ensure that the tests run using a proper MySQL user (the Makefile defaults to the system user running the tests, if no other user is provided, which fails for me as that user doesn’t exist).
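Concretely, those two steps look something like this. This is a sketch, not gospel: the dylib version number and the test credentials below are assumptions, so check /usr/local/mysql/lib for the actual library name and use a MySQL user that exists on your server.

```shell
# Step 1: symlink the client library where the dynamic linker will find it.
# libmysqlclient.18.dylib is the MySQL 5.5 version; yours may differ.
sudo ln -s /usr/local/mysql/lib/libmysqlclient.18.dylib /usr/lib/libmysqlclient.18.dylib

# Step 2: build DBD::mysql, passing Makefile.PL arguments so the test
# suite connects as a real MySQL user instead of the current system user.
cpanm --configure-args="--testuser=root --testpassword=s3kr1t" DBD::mysql
```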

There you go. That worked for me. Maybe it’ll work for you too. Enjoy.

#Catalyst on @dot_cloud: Adding a #PostgreSQL data service. (#Perl in the cloud, Part IIII) (2011-08-16)

Following up on my previous post that demonstrated how to get a basic Catalyst application up-and-running on dotCloud in under ten minutes, let’s explore how to take things a step further by adding a database service.

However, unlike the tutorial (or most Catalyst tutorials for that matter), we’re going to use PostgreSQL instead of SQLite — and we’re going to deploy the app into the cloud vs. just developing locally (thanks to the magic of dotCloud, which makes it so easy).

Following along with the tutorial, we go ahead and add Catalyst::Plugin::StackTrace to the base application module and the Makefile.PL, which ensures it will get auto-installed and built by dotCloud when we push our app. Here’s the commit on Github.

Next, we use the Catalyst::Helper script to create a controller for ‘books’ (and a simple test), and update the controller per the tutorial. Commit

Then, using the Catalyst::Helper script again, we create a simple view called HTML that will use Template Toolkit as its rendering engine. Finally, we set the component path to let the application know where to find the templates. Commit

Last but not least, we create the TT2 template to accompany the /books/list action. Commit

Now we diverge a little bit and head over to the PostgreSQL appendix and create our application’s database for managing books. This assumes that you’re familiar with PostgreSQL and have installed the PostgreSQL server and client, as well as the Perl DBD::Pg module.

So, working locally for now, let’s create a user for this application and then a database per the instructions.
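For example (the role name here is my own choice, and default-catalyst is the database name used later in this post when dumping data to the cloud):

```shell
# Create a dedicated PostgreSQL role for the app (-P prompts for a password)
sudo -u postgres createuser -P catalyst

# Create the books database, owned by that role
sudo -u postgres createdb -O catalyst default-catalyst
```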

The data file provided by the appendix had a couple of typos, so I fixed this up here. Use that data file and load up your PostgreSQL database and check that everything loaded properly.

Next, we use the DBIx::Class schema helper to create the application’s database model files automatically from the database tables and relationships; see this commit.
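The helper invocation that generates those files looks roughly like this (the schema class name and credentials are illustrative, patterned on the Catalyst tutorial):

```shell
script/catalyst_default_create.pl model DB DBIC::Schema Catalyst::Default::Schema \
    create=static 'dbi:Pg:dbname=default-catalyst' catalyst 's3kr1t' '{ AutoCommit => 1 }'
```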

Now, with the models auto-generated and some data in the database, we need to enable our model in our ‘books’ controller. Commit

At this point, you can check out your application locally to ensure that everything is running. In fact, this is a good point to mention a Catalyst development trick: if you run the development server with the -r option (script/appname_server.pl -r), the server reloads whenever you update an application file, so if there’s an error, you can see it right away. I usually leave the window with the server output visible next to my editing window. Good for catching typos right away.

Okay, so finishing up, we configure the HTML view to use a ‘wrapper’ (think header, footer, etc.) for our action-specific views, and we add a CSS file, etc. Commit

Even though we’re not going to use them yet, to stay consistent with the tutorial, we update generated DBIx::Class result class files for many-to-many relationships. Commit

Great. That was all pretty straightforward, so let’s deploy this on dotCloud:

Add the additional requirement DBIx::Class to the Makefile. (In fact, I forgot a few requirements along the way — typical! — so let’s also add: Catalyst::Model::DBIC::Schema, DBD::Pg, Catalyst::View::TT, and MooseX::NonMoose. Curiously, I thought that MooseX::NonMoose would have been installed as a dependency of Catalyst::Model::DBIC::Schema, but it wasn’t, so I had to add it manually to the Makefile.)

Okay, now for the fun part, let’s add a PostgreSQL data service to our dotCloud instance by adding a couple lines to the dotcloud.yml file (Commit) as described in their documentation on PostgreSQL. Pretty simple, eh?
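Those couple of lines amount to a second service entry in dotcloud.yml; the service names www and data below are the ones the rest of this post’s commands reference:

```yaml
www:
  type: perl
data:
  type: postgresql
```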

Now, let’s deploy these new files to dotCloud (note that our Catalyst application and the new data service are not connected yet) with dotcloud push catalyst . and watch dotCloud do its incredible magic of installing all of the CPAN modules that your Catalyst app needs. It really is magic.

If all goes well, you should see:

Deployment finished. Your application is available at the following URLs
www: http://9f385357.dotcloud.com/

Run dotcloud info catalyst.data and you should see something like:

Now, you just need to connect up your new data service with your app (well, almost, we’ll still need to create the remote database and load it with data). To do that, you can either put the database connection info directly into your lib/MyApp/Name/Model/DB.pm file, or read it from the dotCloud environment.json file.

However, at this point, if you put your dotCloud database connection info into your app, your local development version is going to complain loudly and will stop being useful as a way to see what you’re doing before you push the app to the cloud. So, this becomes a good opportunity to set up our local environment to be as similar as possible to our cloud environment.

On dotCloud, the database connection information is automatically put into a handy environment.json file at the root of our dotCloud environment (/home/dotcloud/). To make things easy, let’s also create an environment.json file at the root of our application directory. My application root now looks like this:

And I set my local version of environment.json to match the variable names that dotCloud provides, but with my local connection information, like so:
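Something along these lines. The DOTCLOUD_DATA_SQL_* key names are my reconstruction of the pattern for a PostgreSQL service named data; copy the exact keys from the environment.json that dotCloud generates rather than trusting this sketch:

```json
{
    "DOTCLOUD_DATA_SQL_HOST": "localhost",
    "DOTCLOUD_DATA_SQL_PORT": "5432",
    "DOTCLOUD_DATA_SQL_LOGIN": "catalyst",
    "DOTCLOUD_DATA_SQL_PASSWORD": "s3kr1t"
}
```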

Okay, we’re in the home stretch now! So, to finish things off:

To read these environment.json files, we can just add the handy JSON and IO::All modules to the Makefile. Commit

Now we can update our Model::DB file to read the environment.json on dotCloud if it exists, or to fall back to our local version if not.
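A minimal sketch of that fallback logic, using the JSON and IO::All modules added to the Makefile. The package name, schema class, and DOTCLOUD_* keys are assumptions for illustration; verify them against your own app and your deployment’s environment.json:

```perl
package Catalyst::Default::Model::DB;

use strict;
use warnings;
use base 'Catalyst::Model::DBIC::Schema';

use JSON;
use IO::All;

# Use dotCloud's environment.json when running in the cloud;
# otherwise fall back to the copy in the application root.
my $env_file = -e '/home/dotcloud/environment.json'
    ? '/home/dotcloud/environment.json'
    : 'environment.json';
my $env = decode_json( io($env_file)->all );

__PACKAGE__->config(
    schema_class => 'Catalyst::Default::Schema',
    connect_info => {
        dsn => sprintf( 'dbi:Pg:dbname=default-catalyst;host=%s;port=%s',
            $env->{DOTCLOUD_DATA_SQL_HOST}, $env->{DOTCLOUD_DATA_SQL_PORT} ),
        user     => $env->{DOTCLOUD_DATA_SQL_LOGIN},
        password => $env->{DOTCLOUD_DATA_SQL_PASSWORD},
    },
);

1;
```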

We’re all set now to actually create the database (earlier, we simply created the data service). We’ll do that by running dotcloud run catalyst.data -- createdb default-catalyst. Note that this is using version 0.4 of the dotCloud command-line client — future versions might change this format. The .data targets the command to run for the data service that we set-up (vs. the www service running the app). If that all worked, you should see: # createdb default-catalyst

Last but not least, we load the data from our local development environment to the cloud database. There are probably other (possibly better!) ways to do this, but I found this approach straightforward: su - postgres and then ./bin/pg_dump default-catalyst | ./bin/psql -h XXXXXX.dotcloud.com -p XXXXX -U root default-catalyst. Obviously, replace the Xs with the sub-domain and port of your data service.

With all of that done — phew! — we can run one last dotcloud push catalyst . to push the latest changes into the cloud, install any remaining dependencies, and restart nginx. If all went well, you should see:

Hopefully your PostgreSQL-backed app is now running in the cloud. Hurray!

If you find an error in this post, or have improvement suggestions, please let me know in the comments.

P.S. If your app is not running, the one thing to check — the thing that tripped me up — is how dotCloud integrates with git. The key take-away is: be sure to commit your changes to git, or dotCloud won’t pick them up! Personally, I found this a bit confusing, and — in the future — I’ll probably use the dotcloud ssh catalyst.www command to do my dotCloud-specific debugging on dotCloud, then manually bring those changes back into the local version and commit them. Without doing that, I ended up with a lot of unnecessary commits in the repository as I futzed about with a connection issue.

@dotCloud loves Catalyst apps: Up-and-running in 10-minutes (#Perl in the cloud, Part III) (2011-08-08)

Now, I’ve been known to kvetch a bit about “Perl in the cloud” once or twice before. But this is not a kvetch. No, no, my friend: this is a “Forget the ode, show me the code” post.

I cracked open an old project this weekend with the intention of getting back to work on it. I thought I’d give dotCloud another try, as I wanted a quick way to have this application interact with OAuth providers (hard to do on my local machine). But I quickly noticed that many of the existing Perl-Web-framework-on-dotCloud posts — of which there are several — were a bit out of date, and didn’t reference dotCloud’s new command-line tool or way of deploying services. (Though, I should note, they’re excellent posts and I steal from them liberally here.)

Anyway, after a bit of poking around, I had my app deployed on dotCloud and wanted to share the newer process for getting a bare-bones Catalyst application up-and-running in the cloud in 10-minutes:

I’m going to start by assuming that you have Catalyst running properly on your local development computer. If you’re not at that stage yet, you should probably read the Catalyst::Manual or grab a copy of the rather excellent The Definitive Guide to Catalyst.

Once you’ve got that sorted, run catalyst.pl App::Name to create the scaffolding for your application. I used Catalyst::Default for this example application, and I’ll use that app name throughout. Catalyst will create the scaffolding in a directory called App-Name — so, in my case, that’s Catalyst-Default.

Change your working directory to the app that you’ve just created; in my case that’s cd Catalyst-Default. Let’s call this your app’s root directory for the rest of this walkthrough.

Next, you’ll want to link your root/static directory from the app’s root directory, because that’s where dotCloud will look for static files. You can do this with a simple ln -s root/static static.

Then you’ll want to set up support for PSGI. To do that, you can simply:
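Assuming you have Catalyst::Engine::PSGI installed, its helper generates the .psgi file, which you can then copy to the app.psgi that dotCloud looks for in the application root:

```shell
# Generate script/catalyst_default.psgi via the PSGI helper
script/catalyst_default_create.pl PSGI

# dotCloud expects app.psgi at the root of the app
cp script/catalyst_default.psgi app.psgi
```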

Now edit the ‘app.psgi’ file and add the line use lib 'lib' after the use warnings line. Here’s mine:
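A reconstruction of that file: essentially the stub that the Catalyst::Engine::PSGI helper generates, plus the added use lib line (the app name is the one used throughout this walkthrough):

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use lib 'lib';    # the added line, so the app's modules under lib/ are found

use Catalyst::Default;

# Hand the Catalyst app to PSGI
Catalyst::Default->setup_engine('PSGI');
my $app = sub { Catalyst::Default->run(@_) };
```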

At this point, if you’re aiming to deploy a real application, you should update your Makefile.PL with the modules, plugins, and so on that are required by the application. By doing this, dotCloud is able to install all the required Perl modules from CPAN automatically. This is the most impressive part of dotCloud to me — it’s simply amazing to watch how it handles complex dependency chains without breaking a sweat.

If you’re just going to deploy the default Catalyst application that is built by the scaffolding to follow along, you’ll want to add requires 'Catalyst::Engine::PSGI'; to your Makefile.PL. Add that after the other lines that start with requires.

Now you’re ready to run your Makefile.PL (perl Makefile.PL). Without this step, dotCloud won’t be able to parse your Makefile.PL and you’ll be stuck updating the dotcloud.yml with your dependencies. Not the end of the world, but it creates unnecessary redundancy.

Use the dotcloud create appname command to create your app. In my case, I just used dotcloud create catalyst. (dotCloud doesn’t seem to like names with any special characters, so you’ll need to choose something like ‘catalystdefault’ or ‘catalystappname’.)

You’ll need a dotcloud.yml file to tell dotCloud about the services your app requires — e.g., Perl, Python, Ruby, etc. — so fire up your favourite editor, open up dotcloud.yml, and add these lines:
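For a single Perl service the file is tiny; the service name www below is the one that later commands such as dotcloud logs catalyst.www refer to:

```yaml
www:
  type: perl
```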

Save that file in the root directory of your app. At this point, your app’s root directory should look like so:
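Roughly like this — a sketch, since the exact scaffolding varies by Catalyst version:

```
Catalyst-Default/
    Changes
    Makefile.PL
    app.psgi
    dotcloud.yml
    lib/
    root/
    script/
    static -> root/static
    t/
```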

At this point, you’ll want to go grab a coffee or beer or something because, if you’ve done everything right, the dotCloud build system will review your Makefile.PL for dependencies and start installing Catalyst in your cloud instance so that your app can run properly. If that’s happening, you should see something like:

At the end of the process, you should see this line:

Visit the URL that dotCloud provided in your browser and, if you’re lucky, you should see this.

If you get a 404 or some other page, there was a problem along the way. To troubleshoot, just run dotcloud logs catalyst.www (replacing ‘catalyst’ with the name you gave your app on dotCloud) — the ‘www’ indicates the name of the service you created in your dotcloud.yml. (We’ll dig a bit more into the different services in the next post.)

Let me know in the comments if you have any questions, if anything above is unclear (or could be clearer), or if you run into any problems. I’m not a dotCloud expert by any means, but I am starting to get my head around it.

Next up: Setting up a PostgreSQL data service.

From Perl slacker, to Perl hacker: Perl in the cloud, Part II (2011-05-06)

“Perl hacker Phillip Smith taunted us about the lack of Perl support; but more than our investor’s money, the real keys to the Perl stack have been the very insightful feedback and ideas of another major contributor to the Perl community: Tatsuhiko Miyagawa.”

Sometime last week, while I was basking in the glory of my thirty-eighth birthday, I got the best birthday present ever. “A beta account on dotCloud to try out their new Perl support,” you ask? Nope. The real gift was getting called a ‘Perl hacker’ in the same sentence as Tatsuhiko Miyagawa.

Seriously, though, a lot happened in the last week, and I was too busy slacking off to catch it until now.

For starters, dotCloud announced support for Perl (also on HackerNews) on their PaaS platform (thanks, Miyagawa! And, congrats on the new gig!). This is great news, and the Perl community has been quick to kick the tires; real Perl hackers have already tested several Web application frameworks on dotCloud:

Writing Perl Modules for CPAN (2011-03-29)

This past weekend I dusted off my copy of Sam Tregar's Writing Perl Modules for CPAN. The book was published in 2002 and that gave me some trepidation about re-reading it almost nine years later. However, I had not yet written a CPAN module and I wanted to give it a shot -- so this seemed like a logical place to start. Thankfully, any reservations were misplaced: this book should be on the shelf of anyone wanting to expand their Perl repertoire.

Here's a quick summary of what really stood out for me this time:

Chapter 2, incorrectly titled "Perl Module Basics," should be a must-read for anyone new to Perl. It answers all the burning questions about Perl modules. Specifically, why modules are an improvement over one-off scripts almost 99% of the time, the difference between compile-time and runtime in the life of a Perl program, the concept of Packages, Symbols, Encapsulation, Object-Oriented vs. Functional style modules, Classes, Accessors and Mutators, Inheritance, Overloading, and much more. I've rarely read a programming book chapter so full of valuable information and so clearly written.

Chapter 3 covers "Module Design and Implementation" and does it quickly and thoroughly. It starts with an explanation of Plain Old Documentation in the context of a documentation-first approach to module design, and then quickly jumps into design decisions like functions vs. objects, naming, and parameters and return values. With that information delivered to the reader, it jumps another level deep and provides a thorough exploration of inheritance vs. composition and the challenges of dealing with multiple-inheritance, and then wraps up with a quick overview of how to visually map out a module's design with UML.

Chapter 5 delves into "Submitting Your Module to CPAN," and is a good read for the process-related concerns of getting a module submitted. It touches on how to get feedback from the Perl community and the nitty-gritty details of the PAUSE upload process. Of course, being from 2002, it could use a quick refresh and a section on newer tools like Dist::Zilla.

Last but not least, Chapter 7 -- "Great CPAN Modules" -- is a worthy read. Sections on DBI, Storable, and LWP are particularly noteworthy, while others like CGI.pm could be updated with newer examples, and -- generally -- there are a number of great CPAN modules that could be unpacked here for the reader's benefit, e.g.: modules that take advantage of Moose, and perhaps one of the event-loop modules.

Obviously, that's just four chapters out of eleven that hit the mark. But, nonetheless, I would still recommend the book as a great resource just for those chapters alone. However, the rest of the book is not without value -- just a bit outdated for the era of "Modern" and "Effective" Perl. Specifically:

I'm guessing that most folks would agree that h2xs is a bit outdated, and that people should probably just jump right into reading about module-starter or Dist::Zilla instead.

The chapter on module maintenance and community could be updated to include mention of blogging, Twitter, and, most importantly, Github, as ways to build community, find contributors, and manage patches.

The last three chapters are all devoted to various ways to write CPAN modules in C, either using XS or Inline::C, which didn't feel as contemporary or timeless as other chapters. (That said, I've never wanted to write a Perl module in C, so maybe it's just a topic that doesn't interest me.)

Writing Web applications in Perl is relegated to the last chapter, but seems to be a pressing concern "with the kids" these days. It could easily be presented earlier and be updated to include sections on Catalyst, Dancer, Mojolicious, and Plack.

In summary, my sense is that Apress could fairly easily update this book for 2011, and that doing so would make a great resource for years to come. I must admit, I enjoyed Sam Tregar's clear, simple writing enough to almost forgive him for leaving the Bricolage project to work on Krang (inside joke).

So, what's on your Perl back catalog reading list these days?

Ten million dollars to DotCloud, but still no Perl support (2011-03-23)
Let's face it: The promise of "PaaS" (Platform as a Service) -- easily deploying your application to a whole stack living up there in the cloud -- is pretty cool shit. Back in January (ages ago in Internet time), I kicked the tires of DotCloud and, after a whole three minutes, announced to the world "this is just too easy." No wonder they just raised a cool ten million.

Well, when I asked the DotCloud folks back in January, they indicated that Perl support was on their road map, but that they didn't have a sense of what the most common use cases for deployment are in the Perl community -- for example, CGI, FastCGI, etc.

When I pointed the helpful gent who responded to my inquiry toward Plack (Perl's answer to Ruby's Rack, or Python's WSGI), he seemed to think that it could facilitate "a swift integration into our web stacks." Phew! One issue down.

The next question was about "worker scripts." Specifically, how the DotCloud system could take a "dependency file" (like a Gemfile for Ruby, or pip-style requirements.txt for Python) and "some code" (an installation script) that could be run to automatically install module dependencies and start up a daemon or service. I must admit some confusion about this question -- I kept asking myself, is this not what a Makefile and things like Module::ScanDeps are for? Isn't this exactly what projects like cpanm are meant to do, i.e., ease the installation of Perl modules? This must be a solvable (if not solved) problem.

My guess is that I'm not understanding the requirement clearly, and I'm hoping that the wisdom of the Perl community can weigh in with some suggestions and recommendations for me, our friends at dotCloud, and, well, anyone else that might be thinking about Perl in the cloud.

Needless to say, as PaaS start-ups get bought up faster than umbrellas on a rainy day in downtown Manhattan, my attention keeps turning back toward projects like Oyster, the North West England Perl Mongers initiative to create a Heroku specifically for Perl Web apps. Not only is a project like Oyster attractive for the practical benefits to (the rather large number of) Perl developers, but also as something that potentially won't get instantly bought up by Salesforce, Redhat, or Rackspace.

In fact, given how hot PaaS is, I can't help but wonder why some enterprising Perler hasn't gotten an offering off the ground yet. Or, perhaps more in the Perl spirit, why some group or individual hasn't offered to run it pseudo-voluntarily for the Perl community, like CPAN, PAUSE, or the Perl.org site itself. (Better idea: run it as a paid service and contribute the surplus to The Perl Foundation or the Enlightened Perl Organization.)

Here's to hoping that $10,000,000.00 can push Perl into DotCloud's PaaS cloud offering, or that some whip-smart folks here in the Perl community can finally 'git push' Perl into the clouds. (And, while we're pushing, let's push for a Perl port of the excellent, open-source, libcloud project.)

UPDATE: A comment points out that Phenona might be attacking this challenge. Will look forward to hearing from folks who've tried it.

Git-backed wikis, Gollum, and simple installation experiences (2010-08-16)

Last night I upgraded the Bricolage wiki on Github to the new git-backed wiki that Github rolled out last week. It may sound like a trivial thing and not worth a blog post, but it’s quite the opposite, actually.

The first really interesting thing about the upgrade is that all of a project’s wiki pages are now simple text files in their own git repository. Now I can update these pages any way I like, in any one of several markup languages, including POD. On its own, that’s pretty useful — now I can clone a project’s wiki along with the project itself and submit changes back as I would any other changes via Git.

The second interesting thing about the upgrade is the offline viewing and editing tool that Github released called Gollum. This is a small Ruby/Sinatra application that — when run in the git-backed wiki repository — serves a local copy of the Github wiki that can be used to view and edit those wiki pages offline.

On a final note: I remarked to Theory last night that I hadn’t played with RubyGems for a while and I was impressed at how painless and easy the Gollum installation was. He pointed out that ‘gem install’ almost always runs without any tests (unlike the cpan client). It did make me wonder about the best way to distribute a “mini app” like Gollum within the current Perl ecosystem of mini tools and micro frameworks… perhaps cpanm plus Mojolicious::Lite could create a similar “no brain required” installation?

Food for thought.

Aggregating mailing lists: To Plagger or not to Plagger? (2010-08-11)

Over the last few years, I've come to rely on tools that summarize information for me: Being on a mailing list in digest mode and receiving a summary of activity once or twice a day is a great example -- and it's also the specific challenge that I'm struggling with.

As my information consumption habits change, I find that I also want to customize how that information is delivered, and when, more than simply changing to digest mode allows. And, frankly, even in digest mode, I receive too many individual e-mails.

To resolve this, I end up setting my list subscriptions to "no mail" and fooling myself into thinking I'll peruse them on the Web from time-to-time. However, for those mailing lists, I'd actually like to see what's happening regularly. So, last week I started brainstorming a tool that would send me a daily summary of all activity across a variety of different lists -- combining, say, all lists about security into one daily e-mail summary with links to the individual posts.

So, here's the challenge: half the lists are Mailman lists, the other half are Google Groups; half of them are public, the other half private.

After I'd sketched out the initial requirements, I thought to myself "I probably don't need to start from scratch, I'm pretty sure that my old friend Plagger -- the Perl-based "UNIX pipe programming for Web 2.0" -- can do most of this." Thus, I updated my Plagger source from Github and started digging...

Turns out that Plagger can handle two requirements already: consuming a Mailman archive and outputting it in a variety of ways, and -- obviously -- consuming the XML feed that's provided for a public Google Group. In the Plagger world, that would be as simple as this:
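A sketch of that config using stock Plagger plugins. The group name and address below are placeholders, and I've left out the Mailman-archive side rather than guess at its plugin name:

```yaml
plugins:
  # A public Google Group exposes a plain Atom feed
  - module: Subscription::Config
    config:
      feed:
        - url: http://groups.google.com/group/example-list/feed/atom_v1_0_msgs.xml

  # Mail the aggregated entries out
  - module: Publish::Gmail
    config:
      mailto: me@example.com
```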

So, that part is surprisingly simple. The bigger challenge is dealing with private mailing lists, where:

If it's a Google Group, I need to authenticate to Google first (though, I only need to authenticate once for all lists, thankfully);

If it's Mailman, I need to authenticate to each list with a different password.

So -- drumroll please -- here's the actual question: Should I simply develop a couple of new plugins for Plagger to handle Google Groups and private Mailman lists? Or should I develop something smaller that is focused on my specific set of requirements?

I think the question is really about the status of the Plagger project and community. I've developed a couple of plugins for Plagger already -- one that integrates bit.ly and another that fixes a problem in the existing Publish::MT plugin -- but there are a couple of things that bother me about going down this road again. Specifically:

It's difficult to get help for Plagger: the IRC channel is not really active, and the Plagger Google Group is full of spam.

The Plagger documentation is woefully incomplete and -- in some cases -- just plain wrong.

The existing Plagger Web site is terribly unhelpful, and more-often-than-not, contradictory (where do I get the latest Plagger source? From Subversion, or from Github?)

Lately, some folks have told me they looked at Plagger and then decided to develop their own aggregation tool instead of using it. So, what am I missing? Should I be heading down the "roll my own" path also?

There seems to be some recent activity on Plagger's github page -- a few forks anyway -- so the project is not entirely abandoned, it appears. However, it would be really comforting to see some efforts at making it easy for those of us who are working with Plagger to share configuration files, new plugins, and so on. Or maybe none of this should be a concern at all?

What do you think? What would you do?

Bricolage CMS hacking made easy! (2010-07-05)
After my last post about Installing Bricolage 2 on Mac OS X 10.6 "Snow Leopard," I realized that there are a few more important steps that should be documented for those that want to hack on Bricolage CMS vs. just running it. The following instructions link up your git clone with the application itself, making it easy to apply changes, test them, and push them upstream.
Preparations

Assuming you made it to the "Installing Bricolage" section of the previous post -- and maybe even got Bricolage running! -- you're now all set to tear down that installation and set-up your development environment. Thankfully, Bricolage provides a handy 'make dev' to do all of the hard work for you.

Now, instead of using 'make dist' to create a distribution that can be installed with 'make install', we're going to use 'make dev'. However, before we do that, we're going to do two things:

Remove the existing Bricolage installation to ensure that we're starting with a fresh development installation. You can do that with 'sudo rm -r -f /usr/local/bricolage'. If you did any fancy configuration to get your Bricolage installation working the first time, you may want to back up /usr/local/bricolage/conf/* first. Also, 'make dev' drops the Bricolage database and loads a fresh copy, so if you did any work on your existing installation, you'll want to dump the database first.

The other thing to quickly think about is the location of the git clone you just created. The directories /Users/ and $HOME already have the appropriate permissions, but anything below those may not. Mine was initially in '/Users/username/Documents/Foldername' and that caused a number of problems for Apache relating to permissions on symlinks and so on. Thus, I created a '/Users/username/Development' directory and gave it the necessary 'drwxr-xr-x' permissions so that Apache would be happy. So now my git clone was in '/Users/username/Development/bricolage' and that's where I headed next.
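The preparation steps above can be sketched as a short shell session. This is a sketch under assumptions: the backup destinations are arbitrary, and 'bric' as the database name is my guess at the default, so substitute your own.

```shell
# Back up any custom configuration before tearing down the old install
sudo cp -R /usr/local/bricolage/conf ~/bricolage-conf-backup

# Dump the existing database first -- 'make dev' will drop and reload it
# ('bric' is assumed here; use whatever database name you configured)
pg_dump bric > ~/bricolage-db-backup.sql

# Remove the old installation so we start fresh
sudo rm -rf /usr/local/bricolage

# Create a development directory with permissions Apache can traverse
mkdir -p ~/Development
chmod 755 ~/Development
```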

Make Dev

Because I'm not using the OS X system-installed Perl, I wanted to ensure that I was building Bricolage from '/usr/local/bin/perl' and, thus, I ran '/usr/local/bin/perl Makefile.PL' to get things started.

Next, I ran 'make dev' and passed in a number of variables for the installation that I wanted. Mine looks like 'sudo make dev BRICOLAGE_HTTPD_VERSION=apache2 BRICOLAGE_SSL=0' and tells 'make dev' to build for Apache 2 without SSL. These are the same settings you're asked about during a normal installation.

Once that's done, you should be able to run 'sudo /usr/local/bin/perl /usr/local/bricolage/bin/bric_apachectl start' and see Apache start up successfully, at which point you can log in to Bricolage as you did the first time.
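Condensed, the whole 'make dev' sequence looks like this (the clone location is from my setup above; adjust to taste):

```shell
cd ~/Development/bricolage

# Use the /usr/local Perl, not the system Perl
/usr/local/bin/perl Makefile.PL

# Build a development install for Apache 2 without SSL
sudo make dev BRICOLAGE_HTTPD_VERSION=apache2 BRICOLAGE_SSL=0

# Start Bricolage's Apache instance
sudo /usr/local/bin/perl /usr/local/bricolage/bin/bric_apachectl start
```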

The only difference is that the 'bin', 'comp', and 'lib' directories in /usr/local/bricolage/ are linked to '/Users/username/Development/bricolage/*' (or wherever you chose to create your git clone). Thus, you can make changes to the files in your git clone directory and have those changes reflected in the live Bricolage application that's running from '/usr/local/bricolage'.

Try it out. Make an improvement. Send us a pull request! :-)

Build Aliases

David Wheeler has -- in his own slightly-perfection-obsessed fashion -- taken this idea to new levels. As he explained on the Bricolage mailing list, using the approach outlined above -- passing variables to 'make dev' -- and the Unix 'alias' command, he's able to quickly tear down and build new development versions of Bricolage to test different configurations. David calls these 'build aliases' and you can read more about it here.
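A build alias along those lines might look something like this in your .profile. This is a hypothetical sketch, not David's actual alias -- the name 'bricdev' and the particular flags are mine:

```shell
# Tear down the old install and rebuild a fresh dev environment in one go
alias bricdev='sudo rm -rf /usr/local/bricolage && \
  /usr/local/bin/perl Makefile.PL && \
  sudo make dev BRICOLAGE_HTTPD_VERSION=apache2 BRICOLAGE_SSL=0'
```

Varying the variables in each alias is what lets you flip between, say, SSL and non-SSL builds with a single command.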

That's it for today. Happy Bricolage hacking.

Not for the faint of heart: Installing Bricolage 2 on Mac OS X 10.6 "Snow Leopard"
Phillip Smith, 2010-07-02, http://phillipadsmith.com
Okay, I admit it: Bricolage CMS -- the open-source enterprise-class content management system -- takes a few hours to install. The upside? A well-deserved sense of accomplishment.

Seriously, as someone who works with Bricolage regularly and likes to contribute to the project (when time permits), it's incredibly helpful to be able to have it running locally on my laptop from the latest Github source.

Unfortunately, the Bricolage installation documentation for OS X needs some serious love. There are at least three contradictory resources at the moment: David Wheeler's post "My Adventures with Mac OS X" from 2002 (OS X 10.1), the README.MacOSX that Bricolage comes with, and the "Installing Bricolage on Mac OS X" wiki page on Github, which only covers OS X 10.3. Thankfully, Theory (David Wheeler) is easy to find in the #bricolage channel on irc.perl.org and can be cajoled into providing helpful install hints.

All that said, installing Bricolage 2.0 on the current version of OS X -- 10.6.4 "Snow Leopard" -- was actually quite straightforward. So, before diving into updating all of the install documentation, I wanted to capture the basic process here and get some feedback on next steps. If you want to help with feedback, just jump to the Questions section at the end of this post.

Before you begin

Quoting from a Bricolage wiki page:

Xcode Tools (formerly Developer Tools): As of OS X 10.3, all of the development libraries, compilers, etc. are included in Xcode Tools. In 10.2 and prior, they were included in the Developer Tools. In order to compile anything on OS X, you need to install Xcode Tools. These are available for free from the Apple Developer Connection website. It is currently a 600+ MB download, so go make yourself something to eat while you wait.

So you'll need to download Xcode, or install it from your Snow Leopard DVD.

Installing the pre-requisites

Bricolage is a big application and it requires a non-trivial list of prerequisites: notably a bunch of Perl modules, Expat, and libapreq. And, given that Bricolage is a mod_perl application, you'll also need Apache, mod_perl, and a database like Postgres or MySQL.

Personally, I like to just run the latest version of just about everything, and my sense is that's what many other folks will want to do also, so I elected to skip anything relating to Apache 1.3 or mod_perl 1.

So, referring to the three documents mentioned above, I got underway with the prerequisites as follows:

gdbm: I skipped installing gdbm on Theory's recommendation.

Expat: Pretty straightforward. Download the latest source (expat-2.0.1 in my case), untar it, enter the directory, run './configure' and then 'make', 'make test', and 'sudo make install'.
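As a sketch, the Expat build looks like this (the download URL is illustrative -- grab the tarball from wherever the project currently hosts releases):

```shell
# Fetch and unpack the Expat source (2.0.1 at the time of writing)
curl -LO http://downloads.sourceforge.net/expat/expat-2.0.1.tar.gz
tar xzf expat-2.0.1.tar.gz
cd expat-2.0.1

# Standard autoconf build into /usr/local
./configure
make
make test
sudo make install
```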

Perl: This is a topic for a longer post, but quickly: OS X comes with Perl v5.10.0 these days, however many recommend installing a newer version in /usr/local/ for any serious Perl fun. Compiling your own Perl also helps to ensure that you don't mess up the system's Perl installation, and that future upgrades to the OS won't mess up your Perl.

I already had Perl v5.12.1 built on my laptop, but -- if you need to do that -- the basic steps to build and install Perl with all of the defaults are: download the latest source, untar the source, enter the source directory and type 'sh Configure -de', then 'make', 'make test', and 'sudo make install'. It's very straightforward. After it's installed, be sure to add something like 'export PATH=/usr/local/bin:$PATH' to your .profile or similar to use the new Perl by default.
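Spelled out, the Perl build described above is:

```shell
# Fetch and unpack the Perl source (5.12.1 here)
curl -LO http://www.cpan.org/src/5.0/perl-5.12.1.tar.gz
tar xzf perl-5.12.1.tar.gz
cd perl-5.12.1

# Configure with all defaults (-d), answering the remaining prompts (-e);
# the default prefix installs under /usr/local
sh Configure -de
make
make test
sudo make install

# Put the new Perl first on your PATH
echo 'export PATH=/usr/local/bin:$PATH' >> ~/.profile
```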

Apache 2: I took the lazy route here and just used Theory's Apache 2 installation script. He's got some fancy Capistrano set-up that uses the above linked script to download, unpack, and compile Apache 2 in one step. If you want to do it manually, just follow the steps in that script.

Remember that you'll now have two versions of Apache 2 installed on your system -- the one that ships with OS X, and the one you've just installed -- so you'll want to update your .profile or similar to use the new one by default, with something like 'export PATH=/usr/local/apache2/bin:${PATH}'.

A nice side-effect of using Apache 2 is that there's no need to separately download and install mod_ssl, as it's already included. The same goes for OpenSSL, which ships with OS X these days. So a few steps are saved there.

mod_perl2: This was a bit tricky, as there were various sources of conflicting advice on the best way to configure mod_perl2 on OS X. In the end, I went with the following: download the latest source, untar it and enter the source directory, then run 'perl Makefile.PL', 'make -j8', 'make test' (some tests fail), and 'sudo make install'.

After the successful install, make sure to follow the instructions presented and add the 'LoadModule perl_module modules/mod_perl.so' line to your Apache 2 httpd.conf file.
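Putting the mod_perl 2 steps together, a sketch follows. Note one assumption: the original steps don't mention any Makefile.PL arguments, so the MP_APXS option here is my addition to make sure the build picks up the /usr/local Apache rather than the system one.

```shell
cd mod_perl-2.0.x   # the unpacked mod_perl 2 source directory

# MP_APXS (an assumption, not in the original steps) points the build
# at the apxs from the Apache 2 we just installed
perl Makefile.PL MP_APXS=/usr/local/apache2/bin/apxs

make -j8
make test           # some test failures on OS X are expected
sudo make install

# Then enable the module in /usr/local/apache2/conf/httpd.conf:
# LoadModule perl_module modules/mod_perl.so
```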

Testing everything so far

At this point, I wanted to ensure that everything was working smoothly before I proceeded. To do that, I started up Apache 2 with 'apachectl start' and confirmed that Apache 2 was running, and then ran 'apachectl -M' to list all of the loaded modules and looked to confirm that 'perl_module' was there.
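That sanity check, as commands:

```shell
# Start the newly built Apache 2
apachectl start

# Confirm the server is answering (my addition; any request will do)
curl -I http://localhost/

# List the loaded modules and confirm mod_perl is among them
apachectl -M | grep perl_module
```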

Installing PostgreSQL was dead simple. I just followed Mark H. Nichols' write up, which included the manual steps to create a new user and group for Postgres now that Apple no longer ships the Netinfo Manager application.

If you're using Perl regularly, you'll probably have a bunch of the Perl modules installed already. If not, I would recommend something like Task::Kensho as a great way to install a set of "Enlightened Perl" modules that ease day-to-day development with Perl. Either way, here's a list of the modules that you'll want to install first:

XML::Parser

DBD::Pg

Test::File

Imager

You'll also want most of the modules listed in Bundle::Bricolage. I used the bundle, even though it's a bit outdated; I obviously didn't need mod_perl1, as I was already using mod_perl2. I had also installed libapreq in the steps above, so didn't need to install Apache::Request. You can decide which way you want to go.

In any case, using CPAN or the new cpanm to install the required modules, or the bundle, should get you 90% of the way to having Bricolage installed and running on your Mac or Hackintosh.
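With cpanm on your PATH, the module installs boil down to:

```shell
# Install the individual prerequisites listed above
cpanm XML::Parser DBD::Pg Test::File Imager

# Or pull in most of what Bricolage needs via the (somewhat outdated) bundle
cpanm Bundle::Bricolage

# Task::Kensho is a nice set of general-purpose modules to have anyway
cpanm Task::Kensho
```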

Installing Bricolage

Now you have two ways to go here. You can either download a distribution or get the latest source via Github. If you're going to do any development on Bricolage itself (patches greatly welcome!), you'll want to go the Github route. If you just want to get it running and/or don't have Git installed, you can skip some of these steps by downloading a distribution. Here's how I did it:

Cloned the public git repository with 'git clone git://github.com/bricoleurs/bricolage.git' and entered the 'bricolage' directory and ran 'perl Makefile.PL'.

If you're using the git source, you'll need to run 'make dist' to create a distribution from the source and enter the distribution directory that was created. From there, things are the same regardless of how you got the source.

Run 'make', 'make test', and 'sudo make install', and answer the questions that the installer asks. If you followed the steps above, most of the questions should have sensible / usable defaults suggested and you can just accept them. I answered 'no' for SSL, as I wasn't planning to use it.
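The whole Github route, end to end, looks roughly like this (the name of the directory that 'make dist' creates depends on the release, hence the wildcard):

```shell
# Get the latest source from Github
git clone git://github.com/bricoleurs/bricolage.git
cd bricolage
perl Makefile.PL

# Build a distribution from the git source, then work from it
make dist
cd bricolage-*      # the distribution directory 'make dist' created

# Build, test, and install -- the installer asks its questions here
make
make test
sudo make install
```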

If successful, you should see the following:

Bricolage Installation Complete

You may now start your Bricolage server with the command (as root):

/usr/local/bricolage/bin/bric_apachectl start

If this command fails, look in your error log for more information:

/usr/local/bricolage/log/error_log

Once your server is started, open a web browser and enter the URL for your server:

http://your-local-host-name:8080

Login in as "admin" with the default password "change me now!". Your first action should be changing this password. Click "Logged in as Bricolage Administrator" in the top right corner of the browser window and change the password.

Pointers for documentation and lots of getting started advice are in the main README file in the unpacked distribution directory.

Open a browser, navigate to the URL provided and you should see the Bricolage log-in screen.

If you want to do some Bricolage hacking, you'll also want to run 'make dev' with the proper options from your source directory to link up the /usr/local/bricolage files to your git-managed files. I'll cover that in a separate post. Update: Here's the post.

Questions

So, after all that, I must task myself with cleaning up the installation documentation. However, before I do, my questions are:

I think most of this should simply be in README.MacOSX file and not in a separate Wiki page, or elsewhere. Sensible?

Should information still be provided on how to install Bricolage using the OS X-supplied version of Apache 2? Given the curious way that OS X lays out the various configuration files and how easy it is to install a /usr/local/ version, I don't see many advantages of supporting this approach -- do you?

Now that Bricolage supports Apache 2 and mod_perl 2, should the instructions still provide information on installing Apache 1.3 and mod_perl 1? In the past, the Bricolage team has made a Herculean effort at providing information on almost any supported configuration, but I wonder if it would make sense to provide more detailed and regularly updated information on a "recommended configuration" vs. every possible configuration. Thoughts? My own thinking is -- given how painless it was to install with the more recent releases of Apache and mod_perl -- that the README.MacOSX should present the path of least resistance, and we can have supplemental pages in the wiki for other configurations.