I'm a web developer in Norfolk. This is my blog...

Anonymous classes were added in PHP 7, but so far I haven’t made all that much use of them. However, recently I’ve been working on building a simple dependency injection container for learning purposes. This uses the PHP Reflection API to determine how to resolve dependencies. For instance, if it’s asked for a class whose constructor requires an instance of the DateTime class, it should create that instance and pass it into the constructor automatically when instantiating the class, then return the newly created object.

Mocking isn’t really a suitable approach for this use case because the container needs to return a concrete class instance to do its job properly. You could just create a series of fixture classes purely for testing purposes, but that would mean either defining more than one class in a file (violating PSR-2), or defining a load of fixture classes in separate files, meaning you’d have to write a lot of boilerplate, and you’d have to move between several different files to understand what’s going on in the test.

Anonymous classes give you a means to write simple classes for tests inline, as in this example for retrieving a very basic class. The tests use PHPSpec:

<?php

namespace spec\Vendor\Package;

use Vendor\Package\MyClass;
use PhpSpec\ObjectBehavior;
use Prophecy\Argument;
use DateTime;

class MyClassSpec extends ObjectBehavior
{
    function it_can_resolve_registered_dependencies()
    {
        $toResolve = new class {
        };

        $this->set('Foo\Bar', $toResolve);

        $this->get('Foo\Bar')->shouldReturnAnInstanceOf($toResolve);
    }
}

You can also define your own methods inline. Here we implement the __invoke() magic method so that the class is a callable:

<?php

class MyClassSpec extends ObjectBehavior
{
    function it_can_resolve_registered_invokable()
    {
        $toResolve = new class {
            public function __invoke()
            {
                return new DateTime;
            }
        };

        $this->set('Foo\Bar', $toResolve);

        $this->get('Foo\Bar')->shouldReturnAnInstanceOf('DateTime');
    }
}

You can also define a constructor. Here, we’re getting the class name of a newly created anonymous class that accepts an instance of DateTime as an argument to the constructor. Then, we can resolve a new instance out of the container:

<?php

class MyClassSpec extends ObjectBehavior
{
    function it_can_resolve_dependencies()
    {
        $toResolve = get_class(new class(new DateTime) {
            public $datetime;

            public function __construct(DateTime $datetime)
            {
                $this->datetime = $datetime;
            }
        });

        $this->set('Foo\Bar', $toResolve);

        $this->get('Foo\Bar')->shouldReturnAnInstanceOf($toResolve);
    }
}

For classes that will extend an existing class or implement an interface, you can define those inline too. Or you can include a trait:

<?php

class MyClassSpec extends ObjectBehavior
{
    function it_can_resolve_dependencies()
    {
        $toResolve = get_class(new class(new DateTime) extends Foo implements Bar {
            use MyTrait;

            public $datetime;

            public function __construct(DateTime $datetime)
            {
                $this->datetime = $datetime;
            }
        });

        $this->set('Foo\Bar', $toResolve);

        $this->get('Foo\Bar')->shouldReturnAnInstanceOf($toResolve);
    }
}

In cases where the functionality is contained in a trait or abstract class, and you might need to add little or no additional functionality, this is a lot less verbose than creating a class the conventional way.

None of this is stuff you can’t do without anonymous classes, but by defining these sort of disposable fixture classes inline in your tests, you’re writing the minimum amount of code necessary to implement your test, and it’s logical to define it inline since it’s only ever used in the tests. One thing to bear in mind is that anonymous classes are created and instantiated at the same time, so you can’t easily create a class and then instantiate an instance of it separately. However, you can instantiate one, then use the get_class() function to get its class name and use that to resolve it, which worked well for my use case.
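That get_class() trick can be sketched in isolation. The class below is purely illustrative:

```php
<?php

// Instantiate an anonymous class purely to obtain a usable class name
$instance = new class {
    public function greet(): string
    {
        return 'hello';
    }
};

// get_class() returns the generated class name, which can then be used
// to create further, separate instances later on
$className = get_class($instance);
$another = new $className;

// The separately created instance shares the anonymous class
var_dump($another instanceof $instance); // bool(true)
echo $another->greet(); // hello
```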

Another use case for anonymous classes is testing traits or abstract classes. I generally use Mockery as my mocking solution with PHPUnit tests, but I’ve sometimes missed the getMockForTrait() method from PHPUnit. However, another option is to instantiate an anonymous class that includes that trait for testing purposes:

<?php

$item = new class() {
    use MyTrait;
};

This way, your test class is as minimal as possible, and you can test the trait/abstract class in a fairly isolated fashion.

The project I’m currently working on is a textbook example of what happens when a project uses jQuery when it really ought to use a proper Javascript framework, or starts out just using jQuery and grows out of all proportion. It’s also not helped by the fact that historically it’s only been worked on when new functionality needed to be added, meaning that rather than being refactored, the code base has been copied-and-pasted. As a result, there’s lots of repetitive code in desperate need of refactoring, and huge reams of horrible jQuery spaghetti code.

When I first took over responsibility for the project, I integrated Laravel Mix into it so that I had the means to refactor some of the common functionality into separate files and require them during the build process, as well as use ES6. However, this was only the first step, as it didn’t sort out the fundamental problem of repetitive boilerplate code being copied and pasted. What I needed was a refactor to use something more opinionated. As it happened, I was asked to add a couple of modals to the admin, and since the modals were one of the worst parts of the admin in terms of repetitive code, they were a strong candidate for implementing using a more suitable library.

I looked at a few options:

I’ve used Angular 1 quite successfully in the past, but I didn’t really want to use a framework that was being killed off, and it would be difficult to retrofit into a legacy application

Angular 2+ is actively maintained, but it would again be difficult to retrofit it into a legacy application. In addition, the need for TypeScript would make it problematic.

Vue was a possibility, but it did a bit too much for this use case, and it wasn’t all that clear how to retrofit it to an existing application

Eventually, I settled on React.js, for the following reasons:

It has a preset in Laravel Mix, making it easy to get started with it.

It has a very limited target - React is closely focused on the view layer, dealing only with rendering and event handling, so it does just what I needed in this case.

It has a strong record of use with legacy applications - after all, it was created by Facebook and they added it incrementally.

It’s easy to test - Jest’s snapshot tests make it easy to verify the rendered content hasn’t changed, and using Enzyme it’s straightforward to test interactions with the component

Higher order components provide a straightforward way to share functionality between components, which I needed so that the different modals could be handled in the same way.

By creating a series of components for common user interface elements, I could then re-use those components in future work, saving time and effort.

However, it wasn’t entirely clear how I might go about integrating React into a legacy application. In the end, I managed to figure out an approach which worked.

Normally, I would create a single root for my application, something like this:

import React from 'react';
import ReactDOM from 'react-dom';
import App from './components/App';

ReactDOM.render(
    <App />,
    document.getElementById('root')
);

However, that wasn’t an option here. The existing modals were using jQuery and Bootstrap, and the new modals had to work with them. I therefore needed to have only certain parts of the UI managed with React, and the rest wouldn’t be touched. Here’s an example of how I rendered the modal in the end:

import React from 'react';
import ReactDOM from 'react-dom';
import higherOrderComponent from './components/higherOrderComponent';
import modalComponent from './components/modalComponent';

const Modal = higherOrderComponent(modalComponent);

window.componentWrapper = ReactDOM.render(
    <Modal />,
    document.getElementById('modalTarget')
);

window.componentWrapper.setState({
    foo: 'bar'
});

By extracting the duplicate functionality into a higher order component, I could easily wrap the new modals in that component and share that functionality between the modals. I could then render each component in a different target element, and assign it to a variable in the window namespace. The div with an ID of modalTarget needed to be added in the appropriate place, but otherwise the HTML didn’t need to be touched, since the required markup was in the React component instead.

Then, when I needed to change a value in the state of the component, I could just call window.componentWrapper.setState({}), passing through the values to set, and these would propagate down to the child components as usual. I could also render multiple different modal components on the page, and refer to each one separately in order to set the state.

This isn’t an approach I’d recommend on a greenfield project - state isn’t really something you should be setting from outside a component like this, and normally I wouldn’t do it. However, it seemed to be the easiest way for this particular use case. Over time I’ll port more and more of the UI over to React, and eventually it won’t be necessary as I’ll be storing the application state in something like Redux.

There was a time not so long ago when jQuery was ubiquitous. It was used on almost every website as a matter of course, to the point that many HTML boilerplates included a reference to the CDN.

However, more and more I think it’s probably unnecessary for two main use cases:

jQuery is probably unnecessary for many web apps with simple Javascript

When jQuery first appeared, IE6 was commonplace, and browser APIs were notoriously inconsistent. jQuery was very useful in ironing out those inconsistencies and helping to make the developer’s experience a bit better.

Nowadays, that’s no longer the case. Internet Explorer is on its way out, with IE11 being the only version still supported by Microsoft, and it’s becoming increasingly hard to justify support for older versions, especially with mobile browsers forming a bigger than ever chunk of the market. We’ll probably need to continue supporting IE11 for a good long while, and possibly IE10 for some time too, but these aren’t anything like as bad to work with as IE6. It’s worth noting that newer versions of jQuery are also dropping support for these older browsers, so in many ways it actually does less than it used to.

jQuery is insufficient for web apps with complex Javascript

Nowadays, there are a lot of web applications that have moved big chunks of functionality from the server side to the client side. Beyond a certain (and quite small) level of complexity, jQuery just doesn’t cut it. For me personally, the nature of the projects I work on means that this is a far, far bigger issue than the first one.

I used to work predominantly with Phonegap, which meant that a lot of functionality traditionally done on the server side had to be moved to the client side, and for that jQuery was never sufficient. My first Phonegap app started out using jQuery, but it quickly became obvious that it was going to be problematic. It wound up as a huge mass of jQuery callbacks and Handlebars templates, which was almost impossible to test and hard to maintain. Given this experience, I resolved to switch to a full-fledged Javascript framework next time I built a mobile app, and for the next one I chose Backbone.js, which still used jQuery as a dependency, but made things more maintainable by giving a structure that it didn’t have before, which was the crucial difference.

More modern Javascript frameworks such as Vue and React go further in making jQuery redundant. Both of these implement a so-called Virtual DOM, which is used to calculate the minimum changes required to re-render the element in question. Subsequently using jQuery to mutate the DOM would cause problems because it would get out of sync with the Virtual DOM - in fact, in order to get a jQuery plugin working in the context of a React component, you have to actively prevent React from touching the DOM, thereby losing most of the benefits of using React in the first place. You usually see better results from using a React component designed for that purpose (or writing one, which React makes surprisingly simple), than from trying to shoehorn a jQuery plugin into it.

They also make a lot of things that jQuery does trivially easy - for instance, if you want to conditionally show and hide content in a React component, it’s just a case of building it to hide that content based on a particular value in the props or state, or filtering a list is just a case of applying a filter to the array containing the data and setting the state as appropriate.

In short, for single-page web apps or others with a lot of Javascript, you should look at other solutions first, and not just blithely assume jQuery will be up to the task. It’s technically possible to build this sort of web app using jQuery, but it’s apt to turn into a morass of spaghetti code unless approached with a great deal of discipline, one that sadly many developers lack, and it doesn’t exactly make code reuse easy. These days, I prefer React for complex web apps, because it makes it extremely intuitive to break my user interface up into reusable components, and test them individually. Using React would be overkill on brochure-style sites (unless you wanted to build one with something like Gatsby), but for more complex apps it’s often a better fit than jQuery.

So when should you use jQuery?

In truth, I’m finding it harder and harder to justify using it at all on new builds. I use it on my personal site because that’s built on Bootstrap 3 and so depends on jQuery, but for bigger web apps I’m generally finding myself moving to React, which makes jQuery not just unnecessary for DOM manipulation, but actively counter-productive. Most of what I do is big enough to justify something like React, and it generally results in code that is more declarative, easier to test and reason about, and less repetitive. Using jQuery for an application like this is probably a bad idea, because it’s difficult (not impossible, mind, if you follow some of the advice here, use a linter and consider using a proper client-side templating system alongside jQuery) to build an elegant and maintainable Javascript-heavy application.

As a rule of thumb, I find anything which is likely to require more than a few hundred lines of Javascript to be written, is probably complex enough that jQuery isn’t sufficient, and I should instead consider something like React.

I doubt it’d be worth the bother of ripping jQuery out of a legacy application and rewriting the whole thing to not require it, but for new builds I would think very hard about:

Whether jQuery is sufficient, or you’d be better off using something like React, Vue or Angular

If it is sufficient, whether it’s actually necessary

In all honesty, I don’t think using it when it’s technically not necessary is as big a deal as using it when it’s not really sufficient. Yes, downloading a library you technically don’t need for a page is a bad practice, and it does make your site slower and harder to use on slow mobile connections, but there are ways to mitigate that such as CDNs, caching and minification. If you build a web app using jQuery alone when React, Vue or Angular would be more suitable, you’re probably going to have to write a lot more code that will be difficult to maintain, test and understand. Things like React were created to solve the problems that arose when developers built complex client-side applications with jQuery, and are therefore a good fit for bigger applications. The more complex setup does mean they have a threshold below which it’s not worth the bother of using them, but past that threshold they result in better, more maintainable, more testable and more reusable code.

Now React is cool, you hate jQuery, you hipster…

Don’t be a prat. Bitter experience has taught me that for a lot of my own personal use cases, jQuery is insufficient. It doesn’t suck, it’s just insufficient. If for your use case, jQuery is sufficient, then that’s fine. All I’m saying is that when a web app becomes sufficiently complex, jQuery can begin to cause more problems than it solves, and that for a sufficiently complex web app you should consider other solutions.

I currently maintain a legacy application that includes thousands of lines of Javascript. Most of it is done with jQuery and some plugins, and it’s resulted in some extremely repetitive jQuery callbacks that are hard to maintain and understand, and impossible to test. Recently I was asked to add a couple of modals to the admin interface, and rather than continuing to add them using jQuery and adding more spaghetti code, I instead opted to build them with React. During the process of building the first modal, I produced a number of components for different elements of the UI. Then, when I built the second one, I refactored those components to be more generic, and moved some common functionality into a higher-order component so that it could be reused. Now, if I need to add another modal, it will be trivial because I already have those components available, and I can just create a new component for the modal, import those components that I need, wrap it in the higher-order component if necessary, and that’s all. I can also easily test those components in isolation. In short, I’ve saved myself some work in the long run by writing it to use a library that was a better fit.

It’s not like using jQuery inevitably results in unmaintainable code, but it does require a certain amount of discipline to avoid it. A more opinionated library such as React makes it far, far harder to create spaghetti code, and makes code reuse natural in a way that jQuery doesn’t.

Apologies if some of the spelling or formatting on this post is off - I wrote it on a long train journey down to London, with sunlight at an inconvenient angle.

Recently I had to carry out some substantial changes to the legacy web app I maintain as the lion’s share of my current job. The client has several channels that represent different parts of the business that would expect to see different content on the home page, and access to content is limited first by channel, and then by location. The client wanted an additional channel added. Due to bad design earlier in the application’s lifetime that isn’t yet practical to refactor away, each type of location has its own model, so it was necessary to add a new location model. It also had to work seamlessly, in the same way as the other location types. Unfortunately, these branch types didn’t use polymorphism, and instead used large switch statements, and it wasn’t practical to refactor all that away in one go. This was therefore quite a high-risk job, especially considering the paucity of tests on a legacy code base.

I’d heard of the concept of a golden master test before. If you haven’t come across it, the idea is that it works by running a process, capturing the output, and then comparing the output of that known good version against future runs. It’s very much a test of last resort: in the context of a web app it’s potentially very brittle, since it depends on the state of the application remaining the same between runs to avoid false positives. I needed a set of simple “snapshot tests”, similar to how snapshot testing works with Jest, to catch unexpected breakages in a large number of pages, and this approach seemed to fit the bill. Unfortunately, I hadn’t been able to find a good example of how to do this for PHP applications, so it took a while to figure out something that worked.

Because this application is built with Zend 1 and doesn’t have an easy way to get the HTML response without actually running the application, I was forced to use an actual HTTP client to fetch the content while the web server is running. I’ve used Mink together with Behat many times in the past, and the Goutte driver is fast and doesn’t rely on Javascript, so that was the best bet for a simple way of retrieving the HTML. Had I been taking this approach with a Laravel application, I could have populated the testing database with a common set of fixtures, and passed a request object through the application and captured the response object’s output rather than using an HTTP client, thereby eliminating the need to run a web server and making the tests faster and less brittle.
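For comparison, in a Laravel application that check could be made in a feature test without a running web server, something like this (a sketch under those assumptions, not code from this project):

```php
<?php

namespace Tests\Feature;

use Tests\TestCase;

class SnapshotTest extends TestCase
{
    public function testHomePage()
    {
        // Pass a request through the application and capture the response
        // HTML directly, with no web server or HTTP client involved
        $response = $this->get('/');
        $html = $response->getContent();

        // $html could then be saved or compared as a snapshot
        $this->assertNotEmpty($html);
    }
}
```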

Another issue was CSRF handling. A CSRF token is, by definition, generated randomly each time the page is loaded, and so it broke those pages that had forms with CSRF tokens. The solution I came up with was to strip out the hidden input fields.
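A regular expression run over the fetched HTML before saving or comparing is enough for this. What follows is a rough sketch rather than the exact implementation, and the helper name is my own:

```php
<?php

// Strip hidden input fields (such as CSRF tokens) out of the HTML so
// that randomly generated values don't cause spurious test failures
function stripHiddenInputs(string $html): string
{
    return preg_replace('/<input[^>]+type=["\']hidden["\'][^>]*>/i', '', $html);
}

$html = '<form><input type="hidden" name="_token" value="abc123"><input type="text" name="email"></form>';
echo stripHiddenInputs($html); // <form><input type="text" name="email"></form>
```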

When each page is tested, the first step is to fetch the content of that page. The test case then checks to see if there’s an existing snapshot. If not, the content is saved as a new snapshot file. Otherwise, the two snapshots are compared, and the test fails if they do not match.
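The core of that save-or-compare logic boils down to a few lines. The function and file layout here are illustrative assumptions, not the actual base test case:

```php
<?php

// On the first run, save the content as the known good snapshot;
// on later runs, pass only if the content matches the stored snapshot
function checkSnapshot(string $name, string $content, string $dir): bool
{
    $path = $dir . '/' . $name . '.snap';
    if (!file_exists($path)) {
        file_put_contents($path, $content);
        return true;
    }
    return file_get_contents($path) === $content;
}

$dir = sys_get_temp_dir();
$name = 'snap_' . uniqid();
var_dump(checkSnapshot($name, '<h1>Log in</h1>', $dir)); // bool(true) - saved as new snapshot
var_dump(checkSnapshot($name, '<h1>Log in</h1>', $dir)); // bool(true) - matches
var_dump(checkSnapshot($name, '<h1>Changed</h1>', $dir)); // bool(false) - differs
```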

Once that base test case was in place, it was then straightforward to extend it to test multiple pages. I wrote one test to check pages that did not require login, and another to check pages that did require login, and the paths for those pages were passed through using a data provider method, as shown below:

<?php

namespace Tests\GoldenMaster;

use Tests\GoldenMasterTestCase;

class GoldenMasterTest extends GoldenMasterTestCase
{
    /**
     * @dataProvider nonAuthDataProvider
     */
    public function testNonAuthPages($data)
    {
        $this->goto($data)
            ->saveHtml()
            ->assertSnapshotsMatch();
    }

    public function nonAuthDataProvider()
    {
        return [
            ['/login'],
        ];
    }

    /**
     * @dataProvider dataProvider
     */
    public function testPages($data)
    {
        $this->loginAs('foo', 'bar')
            ->goto($data)
            ->saveHtml()
            ->assertSnapshotsMatch();
    }

    public function dataProvider()
    {
        return [
            ['/foo'],
            ['/bar'],
        ];
    }
}

Be warned, this is not an approach I would advocate as a matter of course, and it should only ever be a last resort as an alternative to onerous manual testing for things that can’t be tested in their current form. It’s extremely brittle, and I’ve had to deal with a lot of false positives, although that would be easier if I could populate a testing database beforehand and use that as the basis of the tests. It’s also very slow, with each test taking three or four seconds to run, although again this would be less of an issue if I could pass through a request object and get the response HTML directly. Nonetheless, I’ve found it to be a useful technique as a test of last resort for legacy applications.

In a previous post, I used the pipeline pattern to demonstrate processing letters using optical recognition and machine learning. The pipeline pattern is something I’ve found very useful in recent months. For a sequential series of tasks, this approach can make your code easier to understand by allowing you to break it up into simple, logical steps which are easy to test and understand individually. If you’re familiar with pipes and redirection in Unix, you’ll be aware of how you can chain together multiple, relatively simple commands to carry out some very complex transformations on data.

A few months back, I was asked to build a webhook for a Facebook lead form at work. One of my colleagues was having to manually export the lead data from Facebook as CSV, and then import it into a MySQL database and a Campaign Monitor mailing list, which was an onerous task, so they asked me to look at more automated solutions. I wound up building a webhook with Lumen that would go through the following steps:

Get the lead IDs from the webhook

Pull the leads from the Facebook API using those IDs

Process the raw data into a more suitable format

Save the data to the database

Push the data to Campaign Monitor

Since this involved a number of discrete steps, I chose to implement each step as a separate stage. That way, each step was easy to test in isolation, and it was easily reusable. As it turned out, this approach saved us because Facebook needed to approve this app (and ended up rejecting it - their documentation at the time wasn’t clear on implementing server-to-server apps, making it hard to meet their guidelines), so we needed an interim solution. I instead wrote an Artisan task for importing the file from a CSV, which involved the following steps:

Read the rows from the CSV file

Format the CSV data into the desired format

Save the data to the database

Push the data to Campaign Monitor

This meant that two of the existing steps could be reused, as is, without touching the code or tests. I just added two new classes to read the data and format the data, and the Artisan command, which simply called the various pipeline stages, and that was all. In this post, I’ll demonstrate how I implemented this.

While there is more than one implementation of this available, and it wouldn’t be hard to roll your own, I generally use the PHP League’s Pipeline package, since it’s simple, solid and well-tested. Let’s say our application has three steps:

Format the request data

Save the data

Push it to a third party service.

We therefore need to write a stage for each step in the process. Each one must be a callable, such as a closure, a callback, or a class that implements the __invoke() magic method. I usually go for the latter as it allows you to more easily inject dependencies into the stage via its constructor, making it easier to use and test. Here’s what our first stage might look like:

<?php

namespace App\Stages;

use Illuminate\Support\Collection;

class FormatData
{
    public function __invoke(Collection $data): Collection
    {
        return $data->map(function ($item) {
            return [
                'name' => $item->fullname,
                'email' => $item->email
            ];
        });
    }
}

This class does nothing more than receive a collection, and format the data as expected. We could have it accept a request object instead, but I opted not to because I felt it made more sense to pass the data in as a collection so it’s not tied to an HTTP request. That way, it can also handle data passed through from a CSV file using an Artisan task, and the details of how the data is obtained are deferred to the class that calls the pipeline. Note this stage also returns a collection, for handling by the next step:

<?php

namespace App\Stages;

use App\Lead;
use Illuminate\Support\Collection;

class SaveData
{
    public function __invoke(Collection $data): Collection
    {
        return $data->map(function ($item) {
            $lead = new Lead;
            $lead->name = $item->name;
            $lead->email = $item->email;
            $lead->save();
            return $lead;
        });
    }
}

This step saves each lead as an Eloquent model, and returns a collection of the saved models, which are passed to the final step:

<?php

namespace App\Stages;

use App\Contracts\Services\MailingList;
use Illuminate\Support\Collection;

class AddDataToList
{
    protected $list;

    public function __construct(MailingList $list)
    {
        $this->list = $list;
    }

    public function __invoke(Collection $data)
    {
        return $data->each(function ($item) {
            $this->list->add([
                'name' => $item->name,
                'email' => $item->email
            ]);
        });
    }
}

This step uses a wrapper class for a mailing service, which is passed through as a dependency in the constructor. The __invoke() method then loops through each Eloquent model and uses it to fetch the data, which is then added to the list. With our stages complete, we can now put them together in our controller:
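Something like the following, where the class name, injection style and JSON responses are illustrative assumptions, while pipe() and process() are the package’s actual methods:

```php
<?php

namespace App\Http\Controllers;

use App\Stages\AddDataToList;
use App\Stages\FormatData;
use App\Stages\SaveData;
use Illuminate\Http\Request;
use League\Pipeline\Pipeline;

class WebhookController extends Controller
{
    public function handle(
        Request $request,
        FormatData $formatData,
        SaveData $saveData,
        AddDataToList $addDataToList
    ) {
        // Convert the request data into a collection for the first stage
        $data = collect($request->all());

        // Assemble the pipeline by chaining pipe() calls
        $pipeline = (new Pipeline)
            ->pipe($formatData)
            ->pipe($saveData)
            ->pipe($addDataToList);

        try {
            // Pass the data through each stage in turn
            $pipeline->process($data);
        } catch (\Exception $e) {
            // A stage can throw to abort processing; handle it here
            return response()->json(['error' => $e->getMessage()], 500);
        }

        return response()->json(['status' => 'OK']);
    }
}
```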

As mentioned above, we extract the request data (assumed to be an array of data for a webhook), and convert it into a collection. Then, we put together our pipeline. Note that we use dependency injection to fetch the steps - feel free to use method or constructor injection as appropriate. We instantiate our pipeline, and call the pipe() method multiple times to add new stages.

Finally we pass the data through to our pipe for processing by calling the process() method, passing in the initial data. Note that we can wrap the whole thing in a try...catch statement to handle exceptions, so if something happens that would mean we would want to cease processing at that point, we can throw an exception in the stage and handle it outside the pipeline.

This means that our controller is kept very simple. It just gets the data as a collection, then puts the pipeline together and passes the data through. If we subsequently had to write an Artisan task to do something similar from the command line, we could fetch the data via a CSV reader class, and then pass it to the same pipeline. If we needed to change the format of the initial data, we could swap the FormatData class for a different one with very little trouble.

Another thing you can do with the League pipeline package, but I haven’t yet had the occasion to try, is use League\Pipeline\PipelineBuilder to build pipelines in a more dynamic fashion. You can make steps conditional, as in this example:

<?php

use League\Pipeline\PipelineBuilder;

$builder = (new PipelineBuilder)
    ->add(new FormatData);

if ($data['type'] == 'foo') {
    $builder->add(new HandleFooType);
}

$builder->add(new SaveData);

$pipeline = $builder->build();
$pipeline->process($data);

The pipeline pattern isn’t appropriate for every situation, but for anything that involves a set of operations on the same data, it makes a lot of sense, and can make it easy to break larger operations into smaller steps that are easier to understand, test, and re-use.
