After some extensive research re-evaluating the PHP framework landscape, two frameworks and their respective “mini-me” counterparts seem to rule the seas.

In this post I’m simply sharing my opinion of Laravel and Symfony and their lighter-weight counterparts: Lumen and Silex. I’m drawing on my 15 years as a PHP developer, 10 of them as a professional (aka full-time employed), and on my dual perspective as a small business owner who does contract work and as a government employee.

The winner is (for this developer): Symfony / Silex.

Let’s just start: I’m hanging my hat on Silex (and Symfony).

As a business owner, it’s my job to choose the best balance between productivity and production for my clients. As leader of a development team, my concerns are pretty much the same, but I have to answer for more than being my own boss on the side. These are my points as to why I feel confident saying this (right now).

Quick note: Laravel is a very nice framework – if LTS was offered and there was a demonstrated history of design stability, my vote would be different.

1. Longevity.

Symfony offers LTS (long-term support) for its versions, it’s corporation-backed, and it has a large following of experienced developers doing their thing. This is extremely important to know, because it means the rug won’t be pulled out from under you from version to version. There’s a highly structured process in place for dealing with upgrades and announcing BC breaks. Silex absolutely benefits from this.

Laravel, on the other hand, is more of a one-man show than Symfony. In my travels, I examined the version history of Laravel and Symfony, and I found that Laravel was simply too volatile to adopt into the environments that I work in. For lack of a better phrase, the cutting edge in Laravel has at times clobbered those who chose to jump in with both feet, on a more frequent cycle than Symfony. Lumen will have the same woes.

(Re-read that: at a more frequent cycle. PROGRESS has a cost – but there needs to be a methodical approach for enterprise environments to feel comfortable.)

2. Opinionated

Lumen is easy to set up – because it’s opinionated. In my world, opinions make a framework harder to “slip stream” into a code base refactor. Can you gut it and make it your own? Yes. But that’s just as much work as starting with something unopinionated, right?

Silex is barely opinionated. Almost no training wheels, which makes it more hostile for inexperienced developers.

That’s it!

Really, those are the things that rub me the wrong way. I think Laravel/Lumen have a gentler learning curve (thanks to those opinions), and you truly can get off the ground faster with Lumen, in my opinion. However, these frameworks largely do the same thing.

This is a somewhat painful decision

I honestly like Laravel/Lumen a bit better in regards to learning them quickly. Laracasts are a true contribution to the community – not just bettering the cause of Laravel.

I think in a few years, Laravel will become the big kid on the block that it deserves to be – but right now there are too many warning flags going off regarding API stability and decision making.

Somewhat painful?

Yes! Laravel and Lumen ship with Eloquent out of the box. It uses the active record pattern, and active record can’t hold a candle to the data mapper pattern in terms of protecting yourself from your dependencies.

This really rubbed me the wrong way because it’s like giving a loaded gun to an inexperienced developer (or team). I expected a better ORM architecture for a framework for artisans.

I realize I can ignore it and use Doctrine – but really, they should not have coupled an ORM to the framework from the get-go. It’s a “crack factor” for the framework in my opinion.

That bug has been around for how long? I’m amazed folks with pitchforks haven’t come out on that one sooner. I myself have suffered great pains dealing with custom session handlers because of this exact bug. Shame, shame! (At least it’s getting fixed.)

Ever wonder how to properly use those packages installed from the require-dev section of composer.json?

Ideally you’d integrate them with your IDE, or perhaps set up your system path to access them via vendor/bin/phpunit. If you use PHPUnit, take a quick look at this on how to properly set up PHPUnit in PhpStorm on a per-project basis (because not all projects use the same PHPUnit version).

Many years ago (in 2011) I wrote “interfaces are worthless”. For the most part they have remained mostly worthless for me, as a superclass of sorts has typically proven a better solution for taxonomy and for enforcing the exact typing rules I have criticized PHP interfaces over in the past.

At about 6 minutes long, I threw together this screencast to show a method for bringing your custom PHP CodeSniffer standards into your project workflow when using Composer. Essentially it covers the convenience of putting your standards into a Composer package and adding a wrapper to ‘extend’ the PHPCS shell/batch script to automatically detect your custom standards, without having to install them system-wide in your development environment. *Best viewed in full screen*
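The composer.json side of that setup can be sketched roughly like this – note the package name (`acme/phpcs-standards`), standard path, and script name are all hypothetical placeholders, not the ones from the screencast:

```json
{
    "require-dev": {
        "squizlabs/php_codesniffer": "*",
        "acme/phpcs-standards": "*"
    },
    "scripts": {
        "sniff": "phpcs --standard=vendor/acme/phpcs-standards/Acme src/"
    }
}
```

With something like this in place, `composer run-script sniff` picks up the project-local standard without any system-wide PHPCS installation.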

I’ve recently worked on a customized emailing suite for a client that involves bulk email (shudder) and thought I’d do a write-up on a few things that I thought were slick.

Originally we decided to use AWS SES but were quickly kicked off of the service because my client doesn’t have clean enough email lists (a story for another day on that).

The requirement from the email suite was pretty much the same as what you’d expect from SendGrid except the solution was a bit more catered toward my client. Dare I say I came up with an AWESOME way of dealing with creating templates? Trade secret.

Anyway – when the dust settled after the initial trials of the system, we were without a way to deliver bulk emails and track the SMTP/email bounces. After scouring for pre-canned solutions, there wasn’t a whole lot to pick from. There were some convoluted modifications for postfix and other knick-knacks that didn’t lend themselves to tracking the bounces effectively (or weren’t implementable in a sane amount of time).

Getting at the bounces…

At this point in the game, knowing what bounced can come from only one place: the postfix maillog. Postfix is kind enough to record the Delivery Status Notification (‘DSN’) in an easily parsable way.
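As a rough illustration, here is how the queue ID and DSN can be pulled out of a bounce line. The sample line reflects typical postfix output, but exact formatting varies with version and configuration – treat the regex as illustrative, not as the production parser:

```php
<?php
// Sketch: extract the postfix queue ID and the DSN from a maillog bounce line.
function parseBounceLine($line)
{
    // Queue ID: 10 hex digits after the postfix process tag;
    // dsn=X.Y.Z sits among the comma-separated key=value pairs.
    $re = '/postfix\/\w+\[\d+\]: ([0-9A-F]{10}): .*dsn=(\d\.\d+\.\d+)/';
    if (preg_match($re, $line, $m)) {
        return array('qid' => $m[1], 'dsn' => $m[2]);
    }
    return null;
}

$line = 'Oct 10 12:00:01 mail postfix/smtp[2231]: 3F4B5C6D7E: '
      . 'to=<user@example.com>, relay=mx.example.com[203.0.113.5]:25, '
      . 'delay=1.2, dsn=5.1.1, status=bounced (user unknown)';

$bounce = parseBounceLine($line); // ['qid' => '3F4B5C6D7E', 'dsn' => '5.1.1']
```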

Pairing a bounce with database entries…

The application architecture called for very atomic analytics, so every email that’s sent is recorded in a database with a generated ID. This ID links click events and open reports on a per-email basis (run-of-the-mill email tracking). To make the log entries sent from the application distinguishable, I changed the Message-ID header to the generated SHA1 ID – this lets me pick out which message IDs are from the application and which ones are from other sources in the log.
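The header tagging amounts to something like the following sketch – the addresses and domain are placeholders, and the actual send is elided. The point is that the SHA1 that keys the analytics row doubles as the Message-ID, so log entries can later be matched back to the database:

```php
<?php
// Generate the per-email tracking ID (also stored in the analytics DB row).
$trackingId = sha1(uniqid('email', true));

// Stamp it into the Message-ID header: 40 hex chars + our domain.
$headers  = "From: noreply@example.com\r\n";
$headers .= "Message-ID: <{$trackingId}@example.com>\r\n";

// mail($recipient, $subject, $body, $headers); // actual send elided
```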

There’s one big problem though:

Postfix uses its own queue IDs to track emails – the first 10 hex digits of the log entry. So we have to perform a lookup to pair the queue ID with our message ID.

This is a problem because we can’t do both lookups in one pass. The time when a DSN comes in is variable – this would lead to a LOT of grepping and file processing: one pass to get the queue ID, another to find the DSN – if it’s even been recorded yet.

We use Logstash where I work for log delivery from our web applications. In my experience, Logstash has exactly the potential for what I was looking for: progressive tailing of logs, plus so many built-in inputs, outputs and filters for this kind of work that it was a no-brainer.

So I set up Logstash and three stupid-simple scripts to implement the plan.

Hopefully it’s self-explanatory what the scripts take for input – and where they’re putting that input (see the holding tank table above):

LogDSN.php %{QID} %{dsn}

LogOutbound.php %{QID} %{message-id}
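A minimal sketch of what LogDSN.php might look like – the table and column names are assumptions, not the original schema. Logstash execs it with the postfix queue ID and the DSN, which land in the “holding tank” table keyed by queue ID. LogOutbound.php has the same shape, recording the message-id instead of the dsn:

```php
<?php
// Sketch of LogDSN.php: record a DSN against its postfix queue ID.
function logDsn(PDO $pdo, $qid, $dsn)
{
    $pdo->exec('CREATE TABLE IF NOT EXISTS mail_log (
        qid TEXT PRIMARY KEY, message_id TEXT, dsn TEXT)');

    // Upsert: the DSN line may be seen before or after the outbound line.
    $stmt = $pdo->prepare(
        'INSERT INTO mail_log (qid, dsn) VALUES (:qid, :dsn)
         ON CONFLICT(qid) DO UPDATE SET dsn = excluded.dsn'
    );
    $stmt->execute(array(':qid' => $qid, ':dsn' => $dsn));
}

// CLI entry point, as invoked by the Logstash exec output.
if (PHP_SAPI === 'cli' && isset($argv[1], $argv[2])) {
    logDsn(new PDO('sqlite:/tmp/holding_tank.db'), $argv[1], $argv[2]);
}
```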

Setting up logstash, logical flow:

Logstash has a handy notion of ‘tags’ – where you can have the system’s elements (input, output, filter) enact on data fragments tagged when they match a criteria.

Full config file (it’s up to you to look at the Logstash docs to determine what directives like grep, kv and grok are doing):


input {
  file {
    format => "plain"
    path => "/var/log/maillog"
    type => "maillog"
  }
}

filter {
  kv {
    type => "maillog"
    trim => "<>"
  }
  grep {
    type => "maillog"
    match => ["status", "bounced"]
    add_tag => ["bounce"]
    drop => false
  }
  grep {
    type => "maillog"
    match => ["message-id", "[0-9a-f]{40}\@dom"]
    add_tag => ["send"]
    drop => false
  }
  grok {
    type => "maillog"
    pattern => "%{SYSLOGBASE} (?<QID>[0-9A-F]{10}): %{GREEDYDATA:message}"
  }
}

output {
  exec {
    tags => "bounce"
    command => "php -f /path/to/LogDSN.php %{QID} %{dsn} &"
  }
  exec {
    tags => "send"
    command => "php -f /path/to/LogOutbound.php %{QID} %{message-id} &"
  }
}

Then the third component – the result recorder – runs on a cron schedule and simply queries records where the message ID and DSN are not null. The mere presence of the message ID indicates the email was sent from the application; the DSN means there’s something for the result recorder script to act on.
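The recorder’s core can be sketched like this – again, the table/column names are assumptions carried over from the earlier sketch, and what “acting on” a bounce means (flagging the address, bumping counters) is application-specific:

```php
<?php
// Sketch of the cron-driven result recorder: rows where BOTH the application
// message ID and the DSN have arrived are complete and ready to act on.
function collectBounces(PDO $pdo)
{
    $rows = $pdo->query(
        'SELECT qid, message_id, dsn FROM mail_log
         WHERE message_id IS NOT NULL AND dsn IS NOT NULL'
    )->fetchAll(PDO::FETCH_ASSOC);

    foreach ($rows as $row) {
        // ...flag the address, bump bounce counters, etc., then clear the row.
        $pdo->prepare('DELETE FROM mail_log WHERE qid = ?')
            ->execute(array($row['qid']));
    }

    return $rows;
}
```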

* The way I implemented this would change depending on the scale – but for this particular environment, executing scripts instead of putting things into an AMQP channel or the like was acceptable.

Where I work, we unfortunately had to skip the 5.4 release of PHP; the release cycle between PHP 5.4 and PHP 5.5 was pretty darn fast and we never got around to replacing APC. We’ve finally got everything up to speed to adopt 5.5 when it hits stable release.

I figured I’d fill in some of the blank air by listing (even if a personal memo for myself) the things I find most exciting for the upcoming PHP 5.5 release.

Built-in opcode cache and optimizer (Zend Optimizer+). This is a biggie: since APC never saw the light of day in PHP 5.4, I suspect many are in the same position we are with APC…

array_column()
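array_column() replaces a loop most of us have hand-rolled many times: pulling one column out of a list of rows. A quick illustration (data made up for the example):

```php
<?php
$records = array(
    array('id' => 1, 'name' => 'alpha'),
    array('id' => 2, 'name' => 'beta'),
);

// Pull out one column...
$names = array_column($records, 'name');       // ['alpha', 'beta']

// ...or re-key the result by another column via the third argument.
$byId = array_column($records, 'name', 'id');  // [1 => 'alpha', 2 => 'beta']
```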

Observation, or a gripe/complaint/whatever:

I’m not quite sure about the password_hash ‘suite’ of functionality – it’s not clear what finally made it in; I’d assume everything? People who don’t understand hashing and encryption are probably going to be more confused than they were before, unless this addition is advocated heavily in the documentation of its counterparts (e.g. the md5 documentation page).

I understand that trivializing the process is beneficial to avoid self-inflicted damage; however, I always get a tad annoyed when we see things labeled as ‘STRONG ENCRYPTION’, since that term is a moving target. I long to see better implementations and standards recognition, e.g. a FIPS level.

Honorable mention

The generators addition gets an honorable mention – I think it will take some time to trend, and the use cases are probably fairly narrow: mostly saving you the time of writing your own iterators.
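For the curious, this is the kind of hand-written Iterator class a generator replaces – lazily producing a sequence without building the whole array in memory (xrange here is the classic example, not a built-in):

```php
<?php
// A generator-based lazy range: values are produced one at a time on demand.
function xrange($start, $end, $step = 1)
{
    for ($i = $start; $i <= $end; $i += $step) {
        yield $i;
    }
}

$total = 0;
foreach (xrange(1, 5) as $n) {
    $total += $n;
}
// $total is 15, but no 5-element array was ever allocated.
```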

Monolog is perhaps the most popular logging library out there for PHP at the time of this writing. It has a lot of support and a nice balance of features.

Unfortunately I have one gripe about the rather closed implementation of the SocketHandler, er, handler?

The problem with the Monolog SocketHandler (as of 1.4.1)

The key problem is that Monolog simply assumes that, say, a TCP port can be connected to. So you can set up your chain of handlers and processors; but for an application as critical as logging, the SocketHandler simply lets itself into the logger object without testing whether it can make a TCP connection.

The problem is: there’s no programmatic way to test whether a SocketHandler object can connect; there’s only an isConnected() helper method – but no canConnect() or similar.

The solution…

The solution is a bit less elegant than I wanted, because the SocketHandler class has its key method, connect(), marked private. Thankfully the class has a ‘mock’ method that lets us at least attempt a socket connection without affecting the state of the object. We can use that to probe whether the socket can be opened, and add the handler to our logger object accordingly.

Example


<?php

class SocketHandler extends \Monolog\Handler\SocketHandler
{
    public function canConnect()
    {
        if ($this->isConnected()) {
            return true;
        }
        if (($probe = $this->fsockopen()) && is_resource($probe)) {
            fclose($probe);
            return true;
        }
        return false;
    }
}

Then, you can have a little bit more assurance that you’ll get your logging to go through the handler without a nasty exception…


<?php

$logger = new Logger(
    'itn',
    array(
        new StreamHandler('/path/to/logfile')
    )
);

$socketHandler = new SocketHandler('tcp://127.0.0.1:7065');

if ($socketHandler->canConnect()) {
    $logger->pushHandler($socketHandler);
} else {
    $logger->alert('SocketHandler connection failed.');
}

A final word…

Even so, the SocketHandler is still not immune to losing connectivity to the target socket after the probe. It would, sadly, be better practice to make your own logger wrapper that safely handles logging to a file as well as to the SocketHandler. Perhaps sometime in the future the SocketHandler can be revised to (optionally) suppress its own connection failures.

Wait, PHP wants to array_merge an array with… itself?

Take another look at this: array_merge(array $array1 [, array $... ])

If you’re good at reading APIs, you’ll see how … odd this is. Seeing as I just got bitten by forgetting to pass a second array to merge into, it’s curious why the hell it doesn’t enforce a minimum of two arguments… any guesses? Or should we chalk this up as a valid, non-nit-picky pitfall of PHP? Otherwise, what are you merging into? Doesn’t make sense…
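To make the pitfall concrete: a single-array call is perfectly legal, and it isn’t even a no-op – numeric keys get reindexed, which is easy to miss when you meant to merge two arrays:

```php
<?php
$a = array(3 => 'x', 7 => 'y');

// Legal, silently "merges" with nothing -- but renumbers the keys!
$merged = array_merge($a);        // [0 => 'x', 1 => 'y']

// What was probably intended:
$real = array_merge($a, array('z')); // [0 => 'x', 1 => 'y', 2 => 'z']
```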

We heavily rely on running multiple instances of code where I work. At any given time several of us have several copies/branches of the site code configured to run from various spots on our development server.

A path we have gone down with APC’s user variable caching is merely one of convenience for the most part. There’s only one resource that comes to mind that requires caching.

At any rate, testing things in a multiple-instance environment is prone to collisions. Much to my dismay, I find the APC configuration options lacking: you can’t set a global prefix. That could be wildly useful for shared hosts and for our scenario – set a global prefix per instance and have it transparently prepended.

In a nutshell: start smart by generating a global prefix and being diligent about using it throughout your codebase (or perhaps wrap APC to make this easier). It will pay off someday; I’m just glad we aren’t in too deep to quickly rewire it all.
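A wrapper along these lines is what I have in mind – the class and method names are assumptions, and the prefix would come from each instance’s config. The point is funneling every key through one place so parallel copies of the codebase can’t collide:

```php
<?php
// Sketch of a thin APC wrapper that transparently prepends a per-instance
// prefix to every user-cache key.
class PrefixedCache
{
    private $prefix;

    public function __construct($prefix)
    {
        $this->prefix = $prefix;
    }

    // All keys funnel through here, so the prefix is impossible to forget.
    public function key($key)
    {
        return $this->prefix . ':' . $key;
    }

    public function store($key, $value, $ttl = 0)
    {
        return apc_store($this->key($key), $value, $ttl);
    }

    public function fetch($key)
    {
        return apc_fetch($this->key($key));
    }
}

// e.g. $cache = new PrefixedCache('jane-featurebranch');
//      $cache->store('acl', $aclData);
```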

I’ve gone over a similar issue before regarding the likes of git/hg – though those are developer tools and are less likely to be present on a production machine.

PHP 5.4 is jumping on the bandwagon by including a ‘cute’ little internal web server – which is enabled by default. The ‘everything needs a standalone server’ thing is starting to get on my security nerves.

It has limited use, and most developers will have limited use for it due to its lack of mod_rewrite (and equivalent) behavior… The worst part is: you can’t disable it if you want to keep the CLI (e.g. no PEAR!).

Wish I spoke up on the list!

Anywho, here’s a hob-knobbed patch (for PHP 5.4.0RC6) that will change that for you (GNU/*nix only!). The patch adds a new configure option: --disable-cli-server.

In a nutshell: a modest-size POST to almost all PHP versions in the wild (sans 5.3.9+) opens the door to an extremely simple DoS.

The vulnerability exploits PHP’s internal hash table implementation (responsible for managing data structures) – more specifically, the technique used to ‘hash’ (generate a key for the hash table) the key of a key=>value relationship.
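For context, PHP hashes array string keys with DJBX33A (“times 33”). Because the hash is a simple linear recurrence, short colliding blocks are trivial to find, and colliding blocks compose – concatenations of them collide too, which is how the proof-of-concept payloads build thousands of keys that pile into a single bucket. A userland sketch of the hash (the 32-bit mask is for illustration; the engine’s internals differ slightly by version):

```php
<?php
// Userland model of PHP's DJBX33A string hash.
function djbx33a($s)
{
    $hash = 5381;
    for ($i = 0, $len = strlen($s); $i < $len; $i++) {
        $hash = (($hash * 33) + ord($s[$i])) & 0xFFFFFFFF;
    }
    return $hash;
}

var_dump(djbx33a('Ez') === djbx33a('FY'));     // bool(true)
var_dump(djbx33a('EzEz') === djbx33a('FYFY')); // bool(true) -- blocks compose
```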

Apache has a built-in limit of 8K max request length (that is, maximum size of the request URL) by default. Can an 8K request (this affects GET) really cause the mentioned DoS attack on reasonable hardware?

Additionally, PHP has a limiter on POST data too: post_max_size. It’s this configuration directive in particular that I think should be put in the limelight.

post_max_size is a runtime/.htaccess-configurable directive that maybe we don’t respect like we should. Often, administrators (myself included) just tell php.ini to accept a large POST size to allow form-based file uploads – it’s not uncommon to see:

upload_max_filesize = 20M
post_max_size = 21M ; or multitudes more!

– in almost any respectable setup.

Perhaps we should evaluate the underlying effects of this setting; maybe it should be something stupidly low by default (enough to allow a large WYSIWYG CMS article’s HTML and a bit more – 32K?) and then delegate a higher limit using Apache configuration.

Caveat: these settings are PER DIRECTORY, meaning .htaccess use is limited – you can’t set the php_value in a .htaccess with a URL match. You’re stuck using a context-sensitive .htaccess (within a directory) or the equivalent directive in the server configuration – this won’t work for people routing everything through a single front-controller file on their websites/apps.

Modifying the actual vhost/host configuration is a sound bet – you can do Location/Files matching and set these at will; for situated web apps, it may be feasible to take a whitelist or blacklist approach on uploader destinations.
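In the vhost, a whitelist approach might look roughly like this (mod_php assumed; the path and limits are illustrative, not a recommendation):

```apacheconf
# Stingy default for the whole host...
php_value post_max_size 32K

# ...widened only where uploads legitimately happen.
<Location /admin/upload>
    php_value upload_max_filesize 20M
    php_value post_max_size 21M
</Location>
```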

More resources:

Here’s the video that thoroughly covers the vulnerability – I’ve cued it up to their recommended mitigation (outside of polymorphic hashing):