Musings & Application Development

Laravel Events are one of those features that are critical to many applications, and the applications I am currently working on are no different. When I first started using Events, I found the setup very simple and everything made sense: fire an event when something happens, and set up a listener that will react. As the docs state, this really is a simple observer implementation. The example in the docs is quite simple and also very useful:
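The original code block didn't survive the archive; paraphrasing the Laravel 4 docs example (the user.login event name and last_login field come from that example), it looked roughly like this:

```php
// Register a listener that reacts whenever the event fires
Event::listen('user.login', function($user)
{
    $user->last_login = new DateTime;
    $user->save();
});

// Fire the event, passing the user along to any listeners
Event::fire('user.login', array($user));
```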

This implementation is quite simple, but there are a few things that can be missed if you haven’t worked with a similar system before.

The event listener has to be declared before the event fires. This seems obvious, but it caused me a few headaches when working with packages, where everything had to be registered in a specific order so that the events would fire properly.

Data can be passed to the listener closure, but this is not necessary. You could write the same closure without passing $user and use Auth::user() to the same end.
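For instance (a sketch; the email property and Log calls are illustrative, and the Auth variant assumes the listener runs inside an authenticated request):

```php
// With the event payload passed in:
Event::listen('user.logout', function($user)
{
    Log::info('Logging out: ' . $user->email);
});

// Without it, resolving the user inside the closure instead:
Event::listen('user.logout', function()
{
    Log::info('Logging out: ' . Auth::user()->email);
});
```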

You can have multiple listeners for a single event, and you can also give them a priority to decide the order in which they will be triggered. Simply pass a third parameter after the closure with a numeric value (e.g. 5). Keep in mind this is a priority, not an order, so the higher the number, the earlier it is triggered.
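A sketch of two prioritized listeners (the event name and comments are illustrative):

```php
// Priority 5: runs second
Event::listen('user.login', function()
{
    // e.g. write an audit log entry
}, 5);

// Priority 10: higher number, so this one runs first
Event::listen('user.login', function()
{
    // e.g. update the last_login timestamp
}, 10);
```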

Putting it to Use

Everything above is more or less covered in the documentation and follows from a basic knowledge of how Events work. The next step I took was to start putting these into practice with a few simple examples that I found very useful.

Event::listen('user.logout', function($user)
{
    // Queue up session clean for this user
    // Remove any unpublished items in database for user
});

Going Further With Events

After using Events extensively in smaller projects, we began a very large scale project at work that was going to be entirely package-based, with a very small framework developed on top of Laravel that allowed us to create a global dashboard layout that packages could extend and implement themselves inside. For our purposes, this had to be more than just a layout: we needed one application that could be changed in its entirety for a second client without having to change the core code or our existing packages. Laravel and Composer make this very easy on the surface, but the hooks we needed inside the core were the biggest hurdle in implementing this structure. Whether or not the following examples are the best way to do this is up for debate, but for our purposes it got the job done and gave us an easy way to implement our extensibility.

Using Events as Hooks

The system we were building required that the main dashboard could have “hooks” that packages could grab onto and return data. We discussed a number of ways of doing this, but we decided that creating a rigid system for each hook made the most sense. What we ended up with was a well-documented system that allowed our packages to declare dashboard widgets, reporting widgets, CMS block types and necessary config values. Below is an example of one of these implementations.

Dashboard Widget

The first thing we do, in our extending package's service provider register method, is listen for the event widgets.dashboard.create and return a very specific array of data that allows this package to define a widget. Note that we are returning an array of data from our listener; the Laravel docs don't talk much about this.
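The listener itself was lost in the archive; a sketch of its shape (the widget keys and values shown here are illustrative, not our exact schema):

```php
// In the package's service provider register() method
\Event::listen('widgets.dashboard.create', function()
{
    // Return a widget definition for the dashboard to render
    return array(
        'title' => 'Recent Orders',
        'type'  => 'table',
        'view'  => 'Orders::widgets.dashboard',
    );
});
```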

Now that we have the listener set up, the event fire is pretty self-explanatory. Note that we set the variable $widgets to the return value of the fire method; when we do this, Laravel returns an array of all the results. Because of this, we can listen to this event six times and $widgets will contain an array of six widget arrays for us to work with, in the order defined by our event priority.
The rest of the code is not as relevant to Events, but we take those arrays, process them based on the type defined in each array, and then pass that data to the view.
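A sketch of the fire-and-process side (the view names and $data structure are illustrative):

```php
// One array per listener, in priority order
$widgets = \Event::fire('widgets.dashboard.create');

foreach ($widgets as $widget)
{
    // Render each widget based on its declared type
    $data['widgets'][] = \View::make('widgets.types.' . $widget['type'], $widget);
}

return \View::make('dashboard.index', $data);
```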

Config Variables

Another issue we needed to solve was allowing one global configuration section to handle configuration for all packages. We used a similar setup to the widgets, along with a simple key-value pair in the database that handled configuration dynamically based on what values the packages requested be added. We debated countless ways of doing this with passing input types and other data sets, but in the end we decided to allow each package to make its own config views and pass the view along. We also wanted to ensure that each package could define a sub-config section without having to tell the core that it exists. The routes we used in combination with the controller allowed us to check if any config/{type} existed and, if it did, display the relevant views.

// Pass the Orders Config View to the Config Builder
\Event::listen('config.build.orders', function()
{
    return 'Orders::config.orders';
});

// Pass the Main Config View to the Config Builder
\Event::listen('config.build', function()
{
    return 'Orders::config.main';
});

// Are we looking for a specific config page?
// This code checks if we are on /config/$type or just on the /config route.
if (is_null($type))
{
    $views = \Event::fire('config.build');

    if (count($views) > 0)
    {
        foreach ($views as $v)
        {
            // Load each view file into a config array for our main config view
            $data['config'][] = \View::make($v);
        }
    }
}
// We have a specific type. Check if we have views registered.
else
{
    // Fire a dynamic type event to retrieve views for a specific type of config page
    $views = \Event::fire('config.build.'.$type);

    // Check if we have any views for this config type
    if (count($views) > 0)
    {
        foreach ($views as $v)
        {
            $data['config'][] = \View::make($v);
        }
    }
    else
    {
        return \Redirect::to('config')->with('warning', 'Could Not Find That Config Page');
    }
}

Final Thoughts

The one downfall of this system may end up being overhead. The basic implementation isn't very taxing, but as we continue to develop we will be looking for ways to avoid firing events when there won't be a listener for that request. This could be as simple as only registering config listeners when "config" exists in the current route.
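That optimization might be sketched like this (hypothetical; in Laravel 4, Request::is() with a wildcard checks the current URI):

```php
// Only register config listeners when the request is for a config page
if (\Request::is('config*'))
{
    \Event::listen('config.build', function()
    {
        return 'Orders::config.main';
    });
}
```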

The other issue, as noted above, is that there is likely a better way to do some of this. We chose this approach because the Events package was already there, had the priority and data-passing capabilities we needed, and had few shortcomings from a developer's perspective. As we continue to work on the application we may lean further into Events, or move away from them if we find an alternative, but for now it's meeting our needs.

Recently I ran into an issue with an application I had developed where I made the mistake of not deleting some child table records when the parent record was deleted. After the app had been in beta for a month or two, I realized that I had thousands of records in the child tables that had no parents and were thus “orphan” records.

After juggling a few ideas in my head, I decided the best approach to this would be to make a temporary table of just the orphan items and then delete from the child table where the ID matches an ID in my orphans table. What does this look like in MySQL?
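The query itself was lost in the archive; reconstructed from the walkthrough below (ParentTable, ChildTable and RelationKey are placeholder names), it looks something like:

```sql
-- Build a temporary table of child ids that have no matching parent
CREATE TEMPORARY TABLE Orphans AS (
    SELECT ChildTable.id
    FROM ChildTable
    LEFT OUTER JOIN ParentTable
        ON ChildTable.RelationKey = ParentTable.RelationKey
    WHERE ParentTable.RelationKey IS NULL
);

-- Delete every child record whose id appears in Orphans
DELETE FROM ChildTable
WHERE id IN (SELECT id FROM Orphans);

-- Clean up the temporary table
DROP TEMPORARY TABLE Orphans;
```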

How Does it Work?
Essentially, the first line of the query makes a new temporary table that we will delete when we are done:

CREATE TEMPORARY TABLE Orphans AS (

The second line selects the primary key (id) of every record in ChildTable.
The third line creates a left outer join with the parent table that used to have records associated with your orphaned records, based on whichever field associates the two tables (RelationKey).
The fourth line specifies that we only want the rows where the RelationKey is NULL, meaning no parent record could be found.

At this point the table is built and we have a table of all orphaned records.
The last command runs a DELETE query telling the database to delete any record that exists in both ChildTable and the new Orphans table. Once that's done, drop the Orphans table and the operation is complete.

As I noted in an earlier post about my switch from CodeIgniter to FuelPHP, I have since started moving much of my development from FuelPHP to Laravel. Most recently I have been using Laravel 4 and have become acquainted and infatuated with Composer packages. Composer is bringing the package functionality that PHP needs and that PEAR can't possibly deliver to the current PHP community. Unfortunately, the current version of FuelPHP (1.4 as I write this) doesn't have Composer baked in yet. The beauty of Composer is that installing it into an application is so simple that this is no longer a barrier.

Note: FuelPHP 2.0, which is a major milestone and change for FuelPHP, will supposedly include Composer. There is talk of a 1.x version including Composer in order to bridge the gap between the current releases and the much-changed 2.0 release.

Setting Up Composer to Work With FuelPHP 1.x

1) Download Composer. Although obvious, this is a necessary step. I find that command-line installation is easiest. Navigate to the root directory of your application and run the following command:

curl -s https://getcomposer.org/installer | php

2) At this point you should have a composer.phar file installed in your application’s root directory. The next step is telling Composer what packages you want to add to your project. Composer uses a file named composer.json, so create this file in the root directory. The most basic example is laid out below, but look for more examples and information in the Composer documentation.
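The example composer.json was lost in the archive; a minimal sketch might look like this (monolog is just an illustrative package, and the vendor-dir setting points Composer at fuel's app vendor directory, per step 4 below):

```json
{
    "require": {
        "monolog/monolog": "1.2.*"
    },
    "config": {
        "vendor-dir": "fuel/app/vendor"
    }
}
```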

3) Now that we have the composer.phar file and a composer.json file with the packages we need, run the composer installation, which will download the packages and place them in the vendor directory specified in composer.json (in this case the default fuel vendor directory).

php composer.phar install

4) Once the installation has finished (it may take some time, depending on the number of packages you have), all packages will be in the fuel/app/vendor directory, and now we need to tell FuelPHP to look at this directory for our new packages.
Open your bootstrap file (fuel/app/bootstrap.php) and add the following code after the Autoloader::register() call that is already there:

require APPPATH.'vendor/autoload.php';

At this point you have finished the setup and installation of Composer.
The next step is to read the Composer getting started guide to make sure you’re up to speed with how it works.
The basics you will need to know right off the bat:

php composer.phar update – this command will update all your packages to the latest version based on criteria specified in your composer.json file.

php composer.phar dump-autoload – run this command whenever you add or remove packages so your autoload.php file can be updated.

Imagine this crazy situation:
I have two servers. One has the files I want and the other is empty. I need to move all the files from A to B. I have SSH access to both servers.
Easy, right? Not so much.

rsync – this is a great little tool for syncing two servers over SSH. The whole thing is just dreamy, with one glaring exception: when it goes to mkdir the new directories on Server B, it doesn't pass the -p flag and therefore can only create one level of new directories.
e.g.: If I am syncing /opt/git/repos/test to /opt/ on B and Server B has nothing in /opt, rsync will attempt to create git/repos/test in one shot and will fail because mkdir can't make three nested directories at once.

scp – this is another great tool for transferring files securely over SSH. It easily solves the rsync problem by creating any necessary directories at any sub-level. So, what's the catch? It won't check for existing files, which means you have to transfer every single file, every single time. When I'm transferring over 20 GB of files, that just won't cut it.

wget – as bad as it sounds, wget started looking like a great option for what I needed to do. wget will FTP into a server and recursively download everything. The entire process is fairly quick and it's great about skipping files that already exist locally. The first time I ran this I believed I had the solution I needed. Unfortunately, after working with Server B for a while I started to notice that some files were missing. It turns out that, despite apparently completing successfully, the command missed hundreds of files. Running it again continues to find more files that weren't transferred, but the logs don't explain why it's ending prematurely.

unison – unison is a tool I installed based on its promising attempts to fix the limitations of rsync and other tools. Unfortunately, after installing it and reading through the user manual, it appears that unison is a heavier tool than I wanted or needed, with logic built around which files had previously been transferred, which had changed, and so on. I may have missed something here, but it was far from what I was looking for out of the box, so I moved on.

The Solution?

If you can address this correctly the first time, it makes much more sense:

First Run – Use SCP to securely transfer all the files from Server A to Server B the first time.

Future Runs – Use rsync to compare the two servers and only download new/changed files.
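Sketched as commands (the host name and paths are illustrative, not the actual servers from this story):

```shell
# Make sure the local parent directory exists
mkdir -p /opt/git

# First run: scp -r copies everything, creating nested subdirectories as it goes
scp -r user@serverA:/opt/git/repos /opt/git/

# Future runs: rsync over SSH only transfers new or changed files
rsync -avz user@serverA:/opt/git/repos/ /opt/git/repos/
```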

The only downside to this approach is that if you create two new child directories (e.g. uploads/new/new2/), rsync will still fail as it tries to create both new directories at once. Alternatively, if the files are not substantial in size, you can tar them up and transfer them as one large file.

What am I missing? There must be a better solution somewhere. All I’m asking for is rsync with a mkdir -p flag to create all the parent directories before it creates a child directory.

Task management is one thing I have always been extremely particular about. I have used everything from the pencil-and-paper combo to mobile apps that sync with my desktop apps. I intended to compile a list of all the apps I had tried, with a blurb on each one explaining why I started using it and why I no longer do. However, the list is so long that that post no longer makes sense, so what follows is a quick overview of what I have discovered for myself.

Getting Things Done

I don't claim to be an expert on this idea by any stretch, but for quite a while I thought that GTD was the best method for tracking tasks and staying motivated to get them done. Unfortunately, over time I found that the oversimplification of GTD apps and implementations meant that when I wanted or needed additional layers of organization, they simply weren't there. Most of the time this was a good thing, but when it came down to finding an app that could handle all my tasks and projects, GTD no longer managed to cut it.

Project Management

After a stint with the overly simple GTD apps, I decided to take the polar opposite approach and try using Project Management applications and suites. Specifically, I tried Asana, Orchestra, activeCollab, Microsoft Project and a few others. I found that for my own projects and tasks, the extra features and levels of organization were a welcome change from the Get Things Done model. After a few months of dedication to this method, I had very few issues with this system – the only real issue I had was the time invested in updating and maintaining the projects. With every additional level of detail allowed, there was additional work involved in simple maintenance.

Hybrid

Attempting to find a healthy mix between GTD and project management eventually led me to Wunderkit. Wunderkit is an extension of sorts of the concepts that Wunderlist brought about. I spent significant time with Wunderlist when I was trying GTD apps, but had never heard of Wunderkit. The switch to Wunderkit, in a nutshell, means that you have multiple instances of Wunderlist organized into projects. Each project also includes collaboration methods and sharing of your tasks and progress. For me, it became GTD with an additional level to separate out all the different projects, departments and contacts that I did business with. This solution is still in use today, but I can't honestly say it is my primary solution at this point, as I rarely keep it updated and use it more for long-term tasks and tracking ongoing projects.

Where Does This Leave Me?

This leaves me with my current state, which came about from a quick Google search for creating tasks from emails automatically. I found that, more often than not, when I wanted to remember something I would send myself an email reminder, which would sit in my inbox until I addressed it. As a proud advocate of inbox zero, my inbox has always been my bottom-line task list of things that need attention. I decided that since this was the one task list I never failed to use, it would make sense to find a way to integrate a task list with the emails themselves. This led me to TaskForce, a browser plugin that lets me convert any email into a task with follow-ups, notes and associated email tracking. It's only been a few days, but so far this has helped move all of my communication and task management into one place, allowing me to manage everything from my Gmail account. There are definite downsides: it is a browser plugin that needs to be installed on each system you want to use it with, and I have yet to find any additional apps or support for mobile integration. But so far the experience has been simple and hassle-free, while letting me use the emails themselves to add additional information and communications to tasks.

I’m certainly not the first to make the switch and I’m certain I won’t be the last. For years, CodeIgniter was my framework of choice for PHP applications of almost any size. The URI classes, simple helper integration and MVC structure were easy to use and light enough to stay out of the way during application development. For my most recent project, I needed something more modular, possibly with a full blown HMVC integration.

After extensive research into HMVC options for CodeIgniter, I stumbled upon fuel and the core functions they had built in from the bottom up. After a few days of reading, I was sold on the idea of using fuel for my newest project. It’s been a few weeks now and I can definitely say that this was a decision I will never regret. Rather than going into a discussion of the differences between CodeIgniter and fuelPHP (which has been done before quite well), I will just summarize some of my favourite features.

Built-in Modular Structure – this was the reason I stumbled upon fuelPHP in the first place. Fuel doesn't force you into using modules or an HMVC file structure, but if you choose to implement them the process is well documented and quite easy to integrate. Once you start creating apps in a modular fashion, it becomes obvious why this structure has clear benefits. I won't go into the benefits of HMVC, but a quick read around will turn up various questions and answers on the topic.

ORM in the Core – When reading about the ORM Class in the core of fuel, I had no intentions of using it for my applications; I was always comfortable using models to extend standard SQL queries or using the Active Record class in CodeIgniter. I decided to give the ORM package a chance for one module in the new application and ended up using it for every module so far. The ORM maps a model to each table in the database, establishes the fields of the table and the relationships to other tables.
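A sketch of what that mapping looks like with Fuel's ORM package (the model, table and field names here are illustrative):

```php
class Model_Article extends \Orm\Model
{
    // The table this model maps to
    protected static $_table_name = 'articles';

    // The fields of the table
    protected static $_properties = array('id', 'title', 'body', 'category_id');

    // Relationship to another table
    protected static $_belongs_to = array('category');
}

// Usage: fetch a row and walk the relation
$article = Model_Article::find(1);
echo $article->category->name;
```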

Complete Flexibility – One of the many benefits of fuel is that there are very few restrictions on how you write code. Classes can be in any file structure you want, you can declare any folder as a “modules” folder, you can easily extend native classes and there are countless other examples of how fuel lets you write code in your own way.

Rockstar Team Running the Show – One thing I often heard about CodeIgniter was that EllisLab was creating it more to fit its own needs than those of the community. The slow process for change was also blamed on restrictions EllisLab put on community development. It was refreshing to read up on fuelPHP and know that not only is the team made up of incredible minds, but community is the focus of development.

PHP 5.3 – Although this may cause issues for some hosting environments (I had to update my own VPS manually), the use of PHP 5.3 means that fuelPHP can take advantage of all the features PHP has introduced since the PHP 4 days.

Community Enthusiasm – One of the biggest reasons I fell in love with CodeIgniter was the massive community following and development. Libraries and modifications were being developed and released constantly, which helped keep the framework moving forward when the core was moving at a slower pace. This is the same reason I have fallen in love with fuel: the community is on fire about fuel. Not only are they excited about it, but they're helping develop modules, packages and the core at an incredible pace.

There are countless other things I have found myself loving about fuel, but these are the most obvious ones at a general level. At this point I have begun developing all my applications in a modular structure. I have been scouring GitHub for fantastic packages (like Warden) that I can extend to save time and use to learn from other experts.

Update:
I feel it is only fair to disclose the fact that much of this love has since passed. Although I still think fuelPHP has some great features and functionality, the community, development speed and brilliance over at Laravel has converted me. See you in #laravel on Freenode.