Sails.js is an MVC framework for Node.js with many features directly out of the box that make prototyping web applications a lot faster and easier. One of these features, called Blueprints, automatically generates a REST API based on your model definition.

ng-admin is an Angular.js application that can be connected to any REST API and, with some configuration, presents a fully-featured CRUD interface for all entities accessible through the API.

Bringing the two together allows us to add an admin interface to an API with minimal programming effort. However, the two don’t work together perfectly out of the box. Most features already work, but to get everything up and running properly, especially list views with pagination, some minor changes must be made in both projects.

Sorting and Limiting for List Views

ng-admin adds some parameters to its GET requests that are supposed to tell Sails.js how many objects to return and how many to skip. However, Sails.js does not understand these parameters, and therefore list views do not work initially. But, as described in the chapter “Customizing the API Mapping” of the ng-admin documentation, it is possible to change the way ng-admin builds its requests.

This can be done in the ng-admin.js file, where we set up our ng-admin application anyway. For Sails.js, a request interceptor needs to be added after the call to admin.config.
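A minimal sketch of what that interceptor can look like, using Restangular’s addFullRequestInterceptor as described in the ng-admin documentation (admin is assumed to be the Angular module from the original setup):

admin.config(['RestangularProvider', function (RestangularProvider) {
    RestangularProvider.addFullRequestInterceptor(function (element, operation, what, url, headers, params) {
        if (operation === 'getList') {
            // ng-admin sends _page/_perPage, Sails.js blueprints expect skip/limit
            params.skip = (params._page - 1) * params._perPage;
            params.limit = params._perPage;
            delete params._page;
            delete params._perPage;

            // ng-admin sends _sortField/_sortDir, Sails.js expects a single sort parameter
            if (params._sortField) {
                params.sort = params._sortField + ' ' + params._sortDir;
                delete params._sortField;
                delete params._sortDir;
            }
        }
        return { params: params };
    });
}]);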

How this modifies the parameters is pretty obvious, but how, when, or why it gets called is not so clear. It seems to work via Angular.js dependency injection, but so far I haven’t worked enough with Angular.js to be sure.

Pagination in List Views

So far, ng-admin will only display a single page of results in every list view. That happens because it has no idea how many items actually exist behind the API. It wants to read this information from a special HTTP header called X-Total-Count. Sails.js does not set this header, so pagination does not work.

Now, we could go and add logic to every controller to count the total number of objects in our database and set the X-Total-Count header. While that would work, it would be kind of dumb, because we would have to copy and maintain the same code in each and every controller that is part of our API.

Fortunately, Sails.js allows us to override parts of its blueprints with custom logic. We need to override the find action with a version that counts the number of entries and sends it to ng-admin in the X-Total-Count header. To do so, we create a new directory called blueprints inside our api directory and place a file called find.js in it.
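The file is essentially a copy of the stock find action with a counting step added at the end. A condensed sketch (the actionUtil require path matches Sails.js 0.12; it may differ in other versions):

// api/blueprints/find.js
var _ = require('lodash');
var actionUtil = require('sails/lib/hooks/blueprints/actionUtil');

module.exports = function findRecords(req, res) {
    var Model = actionUtil.parseModel(req);
    var criteria = actionUtil.parseCriteria(req);

    // Build the same query as the original blueprint action
    var query = Model.find()
        .where(criteria)
        .limit(actionUtil.parseLimit(req))
        .skip(actionUtil.parseSkip(req))
        .sort(actionUtil.parseSort(req));
    query = actionUtil.populateRequest(query, req);

    query.exec(function found(err, matchingRecords) {
        if (err) return res.serverError(err);

        // Subscribe to socket updates, just like the original action
        if (req._sails.hooks.pubsub && req.isSocket) {
            Model.subscribe(req, matchingRecords);
            if (req.options.autoWatch) { Model.watch(req); }
            _.each(matchingRecords, function (record) {
                actionUtil.subscribeDeep(req, record);
            });
        }

        // The new part: count all records matching the same criteria
        // and send the total in the X-Total-Count header
        Model.count(criteria).exec(function counted(err, total) {
            if (err) return res.serverError(err);
            res.set('X-Total-Count', total);
            res.ok(matchingRecords);
        });
    });
};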

This action now gets called whenever ng-admin requests data for a list view, and it sends the proper header along with the data. The code is heavily based on the original implementation of find.js and some code I found in a Gist on GitHub. I only updated it a bit to work with a newer version of Sails.js (0.12 in this case).

Because the code above requires lodash and we cannot rely on it automatically shipping with Sails.js, we need to install it via npm before everything works:

npm install lodash --save

With these two changes in place, our ng-admin application should now be able to nicely display list views for our automatically generated Sails.js API.

I was recently building a long-running PHP (CLI) job that exported large amounts of data from a MySQL database using the Doctrine DBAL. One problem I noticed was that the more data it exported, the more memory it consumed. That was odd, because it only reads from the database, accumulates new objects, sends them off to another service and then discards them. Analyzing memory consumption is not that easy, especially with larger programs that shuffle a lot of data around.

A simple yet effective method is to output the current memory usage at different places in the code:

echo memory_get_usage(true) . PHP_EOL;

This confirmed that the memory usage was indeed increasing over the runtime of the job. I suspected Doctrine was the culprit, because the Entity Manager keeps all entities it fetches from the database in memory in case they are needed later; this is called the Unit of Work. I added some more debug output that periodically prints the class names of the entities Doctrine has under management and how many instances of each exist.
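Doctrine’s UnitOfWork exposes its identity map, so a rough sketch of this debug output can look like this (assuming $doctrine is the EntityManager instance, as in the snippets below):

// Print how many entities of each class the Unit of Work currently manages
foreach ($doctrine->getUnitOfWork()->getIdentityMap() as $className => $entities) {
    echo $className . ': ' . count($entities) . PHP_EOL;
}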

Some entities indeed had close to a million instances under Doctrine’s management. I knew at which point I wouldn’t need some of them any more, so I decided to remove all entities of specific types from Doctrine’s Unit of Work:

$doctrine->clear('Path\To\Namespace\Entity');

This did decrease the memory consumption, but not nearly as much as I had hoped or expected. Then I remembered that it is somehow possible to get all SQL queries that Doctrine has ever executed, which means they must be kept in memory. With lots of big entities and elaborate relationships between them, this can quickly amount to several million very big SQL queries, each stored as a string. It is of course possible to disable the SQL logger:

$doctrine->getConnection()->getConfiguration()->setSQLLogger(null);

It turned out that this solved my memory problem; the SQL logger did indeed consume lots of RAM. A nice additional side effect was that the entire job now ran about twice as fast as before. So disabling SQL logging in Doctrine generally seems to be a good idea for long-running jobs. You can still leave it enabled when the software runs in debug mode or with a verbose flag, if you need it for debugging purposes.
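For example, with $debug standing in for whatever flag the application uses:

// Keep the SQL logger only when explicitly debugging
if (!$debug) {
    $doctrine->getConnection()->getConfiguration()->setSQLLogger(null);
}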

After setting up Alpine Linux to run Docker containers and fixing a whole bunch of issues along the way, I came to the point where I tried to run a node.js app inside a Docker container on an Alpine Linux host. But node immediately exited with a segmentation fault. After some experimenting, I just started the latest version of the official node Docker container and ran node inside it. It still didn’t work.

First, there are some network config changes done by Docker, then Docker complains about missing support for nested cgroups, then some more network config changes follow before node crashes with a segmentation fault. And eventually there are some error messages because GRSEC prevents node from writing a core dump.

As with any Linux distribution: if I can’t get it to run properly after a few hours of trying, I throw it away and won’t touch it again for a few years. Now I have added Alpine Linux to that list. Fortunately, there are enough alternatives out there.

But since I’m not really interested in trying out many different Linux distributions, I’ll just go back to Ubuntu. Yes, it is still a ~600 MB download instead of an 80 MB one, but at least it works most of the time.