About SSH Keys

SSH keys provide a more secure way of logging into a virtual private server with SSH than using a password alone. While a password can eventually be cracked with a brute force attack, SSH keys are nearly impossible to decipher by brute force alone. Generating a key pair provides you with two long strings of characters: a public and a private key. You can place the public key on any server, and then unlock it by connecting to it with a client that already has the private key. When the two match up, the system unlocks without the need for a password. You can increase security even more by protecting the private key with a passphrase.

Step One—Create the RSA Key Pair

The first step is to create the key pair on the client machine (there is a good chance that this will just be your computer):

ssh-keygen -t rsa

Step Two—Store the Keys and Passphrase

Once you have entered the ssh-keygen command, you will be asked a few more questions:

Enter file in which to save the key (/home/demo/.ssh/id_rsa):

You can press enter here, saving the file to the user home (in this case, my example user is called demo).

Enter passphrase (empty for no passphrase):

It’s up to you whether you want to use a passphrase. Entering one does have its benefits: the security of a key, no matter how strong its encryption, still depends on it not being visible to anyone else. Should a passphrase-protected private key fall into an unauthorized user’s possession, they will be unable to log in to its associated accounts until they figure out the passphrase, buying the hacked user some extra time. The only downside, of course, is having to type the passphrase each time you use the key pair.
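
Step Three—Copy the Public Key

Once the key pair is generated, the public key needs to be placed on the server you want to log in to. One way to do that (a sketch, using the example user and address that appear in the output below) is the ssh-copy-id utility:

```
ssh-copy-id user@12.34.56.78
```

Alternatively, you can append the contents of ~/.ssh/id_rsa.pub to the server’s ~/.ssh/authorized_keys file by hand. Connecting to the server for the first time produces output like the following.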

The authenticity of host '12.34.56.78 (12.34.56.78)' can't be established.
RSA key fingerprint is b1:2d:33:67:ce:35:4d:5f:f3:a8:cd:c0:c4:48:86:12.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '12.34.56.78' (RSA) to the list of known hosts.
user@12.34.56.78's password:
Now try logging into the machine, with "ssh 'user@12.34.56.78'", and check in:
~/.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.

Now you can go ahead and log into user@12.34.56.78 and you will not be prompted for a password. However, if you set a passphrase, you will be asked to enter the passphrase at that time (and whenever else you log in in the future).

Optional Step Four—Disable the Password for Root Login

Once you have copied your SSH keys onto your server and ensured that you can log in with the SSH keys alone, you can go ahead and restrict root login so that it is only permitted via SSH keys.

In order to do this, open up the SSH config file:

sudo nano /etc/ssh/sshd_config

Within that file, find the line that includes PermitRootLogin and modify it to ensure that users can only connect with their SSH key:
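
The line to look for, with the value that restricts root to key-based logins (a sketch; in newer OpenSSH releases the same setting is spelled prohibit-password rather than without-password):

```
PermitRootLogin without-password
```

Save the file and reload the SSH daemon so the change takes effect.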

I’d been considering using a CSS pre-processor like SASS, LESS, or Stylus for a very long time. Every time someone asked me if I was using any of these tools I would say that I was used to my current workflow and didn’t really see a reason for changing it, since the problems those languages solve are not really the problems I have with CSS. Then yesterday I read two blog posts that made me reconsider my point of view, so I decided to spend some time today studying the alternatives (once again) and porting some code to check the output, to see whether these languages would really help keep my code more organized and maintainable, whether they would make the development process easier, and whether they have evolved over the past few years.

It takes a couple of hours for an experienced developer to learn most of the features present in these languages (after you learn the first couple, the next ones are way easier), but if you have no programming skills besides CSS/HTML and don’t know basic programming logic (loops, functions, scope) it will probably take a while; the command line is another barrier for CSS/HTML devs… But that isn’t the focus of this post. I’m going to talk specifically about overused and misused features, and I will try to explain the most common problems I see every time someone shows a code sample or I look at a project written with any of these languages/pre-processors.

Mixins

What are mixins?

Mixin is a common name used to describe the idea that an object should copy all the properties of another object. To sum up, a mixin is nothing more than an advanced copy and paste. “All” the famous pre-processors have some kind of mixin.

Dumb code duplication is dumb

Following the SCSS syntax (Sass), a mixin can be defined and used as:
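
A minimal sketch (the mixin and selector names are my own illustration):

```scss
@mixin border-radius($radius) {
  -webkit-border-radius: $radius;
  -moz-border-radius: $radius;
  border-radius: $radius;
}

.box    { @include border-radius(5px); }
.button { @include border-radius(5px); }
```

In the compiled CSS, .box and .button each receive their own copy of all three properties.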

Note that the properties are duplicated, which is very bad: file size will increase a lot, and overall performance will also be degraded if mixins are not used with care. Imagine that on a large project with thousands of lines of code, the amount of duplicated code would be unacceptable (by my standards).

This problem isn’t specific to SASS; it is also present in LESS, Stylus, and any other language/pre-processor that supports the same feature. By adding a new layer of abstraction, the developer won’t realize he is creating code with lots of duplication. ALWAYS gzip CSS and JS files! gzip is really good at compressing duplicate data, so this problem might be irrelevant or nonexistent in production code. Just beware that the generated CSS will get harder to maintain if you or future devs for some reason decide to stop using a pre-processor and simply update the generated CSS (maybe they don’t have access to the source files or have no experience with a pre-processor).

Extend

LESS and Stylus don’t have support for anything similar to an extend, which is why I picked SCSS (Sass) for the code samples. An extend is like a “smarter mixin”: instead of copying and pasting the properties, it sets the properties on multiple selectors at once.
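
A sketch of the difference (class names are mine):

```scss
.base-box {
  padding: 10px;
  border: 1px solid #ccc;
}

.sidebar-box { @extend .base-box; }
.footer-box  { @extend .base-box; }
```

This compiles to a single rule with all three selectors sharing the declarations, .base-box, .sidebar-box, .footer-box { padding: 10px; border: 1px solid #ccc; }, rather than three copies of the properties.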

Way closer to what a normal person would do manually… Only use mixins if you need to pass custom parameters. If you see yourself using the same mixin multiple times passing the same values, then you should create a base “type” that is inherited by other selectors. Compass (a nice SASS framework) has a lot of mixins which I think should be base classes instead.

Extend isn’t enough

Note that extend avoids code duplication, but it also causes other problems: the number of selectors can become an issue. If you @extend the same base class many times you may end up with a rule that has thousands of selectors, which won’t be good for performance either and can even make the browser crash.

Another issue is that every class you create to be used only by @extend is included in the compiled file (even if unused), which can be a problem in some cases (name collisions, file size) and makes this process not viable for creating a framework like Compass.

I really wish SASS would improve the way @extend works (and that the other pre-processors would implement a similar feature) so we could create many base classes for code reuse without necessarily exporting them. Something like:
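
For what it’s worth, Sass did later ship a feature along these lines: placeholder selectors, written with %, which can be extended but are never output on their own. A sketch (names are mine):

```scss
// %base-box can be @extend-ed, but is not emitted in the compiled CSS
%base-box {
  padding: 10px;
}

.sidebar-box { @extend %base-box; }
```

Only .sidebar-box appears in the generated stylesheet.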

Extend and mixins can be bad for maintenance

Contrary to common knowledge, extending other classes and creating mixins can degrade maintainability. Since the place where you use the properties is far away from where the properties are defined, there is a bigger chance that you will change properties without noticing you are affecting multiple objects at once, or without realizing which elements are affected by the changes. This is called “tight coupling”:

Tightly coupled systems tend to exhibit the following developmental characteristics, which are often seen as disadvantages:

A change in one module usually forces a ripple effect of changes in other modules.

Assembly of modules might require more effort and/or time due to the increased inter-module dependency.

A particular module might be harder to reuse and/or test because dependent modules must be included.

I prefer to group all my selectors by proximity; that way I make sure that when someone updates a selector or property they know exactly what is going to be affected by the change, even if that implies some code duplication.

Avoid editing base classes as much as possible; follow the “open/closed principle” whenever you can. (Augment base classes, do not edit them.)

Nesting

Another feature that a lot of people consider useful is selector nesting, so instead of repeating the selectors many times you simply nest the rules that should be applied to child elements.
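
A small SCSS sketch of nesting (selectors are my own):

```scss
.nav {
  li { display: inline-block; }
  a  { color: #333; }
}
```

This compiles to the flat rules .nav li { display: inline-block; } and .nav a { color: #333; }, the same thing you would otherwise type out by hand.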

By abstracting the selectors it becomes very easy to be over-specific, and specificity is hard to handle and bad for maintainability. I’ve been following the OOCSS approach and I don’t need child selectors that much, so I don’t think that typing the same selector multiple times is a real problem (especially with good code completion). I know a lot of people don’t agree with that approach, but for the kind of stuff I’m doing it’s been working pretty well.

Call me a weirdo but I also find nested code harder to read – since I’ve been coding non-nested CSS for more than 7 years.

Sum up

These tools have some cool features, like helper functions for color manipulation, variables, math helpers, logical operators, etc, but I honestly don’t think they would improve my workflow that much.

My feeling about these pre-processors is the same feeling I have about CoffeeScript: nice syntax and features, but too much overhead for no “real” gain. Syntax isn’t the real problem in JavaScript for me, the same way it isn’t the real problem in CSS (or most languages). You still need to understand how the box model works, specificity, cascading, selectors, floats, browser quirks, etc… You are just adding another layer of abstraction between you and the interpreted stylesheet, adding yet another barrier for future developers and increasing the chance of over-engineering. Markup may become simpler (with fewer classes/ids) but it comes with many drawbacks.

For me the greatest problem is developers who write CSS without the knowledge required to build a maintainable and scalable structure. A stylesheet full of mixins, if/else, loops, variables, functions, etc, will be as hard to maintain as a bloated hand-crafted stylesheet, if not harder. Developers have an inherent desire to be “clever”, and that is usually a red flag.

“Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?” – Brian Kernighan

Mixins are popular nowadays because of browser vendor prefixes. The real problem isn’t that CSS doesn’t support mixins or variables natively, but that we have to write an absurd amount of vendor prefixes for no real reason, since most of the implementations are similar and most of the features are only “cosmetic”. The real issue isn’t the language syntax, but the way that browsers add new features and people use them before they are implemented broadly (without prefixes). This could be handled by a pre-processor that only adds the vendor prefixes (without the need for mixins or a special language), like cssprefixer. Try to find the real problem you are trying to solve and think about different solutions.

“It’s time to abolish all vendor prefixes. They’ve become solutions for which there is no problem, and they are actively harming web standards.” – Peter-Paul Koch

I’ve been following the OOCSS approach on most of my latest projects, and will probably keep doing it until I find a better approach. For the kind of stuff I’m coding it is more important to be able to code things fast and make updates during the development phase than to maintain and evolve the project over many months or years. I find it very unlikely to make drastic design changes without updating the markup; in the last 100 projects I coded it probably happened only 2 or 3 times. (css zen garden is a cool concept but not really that practical.) Features like desaturate(@red, 10%) are cool, but usually designers already provide me a color palette to be used on the whole site, and I don’t duplicate the same value that much. If I do duplicate it everywhere, then I can simply do a “find and replace” inside all the CSS files and call it a day. By using a function that generates a color (whose resulting value you have no idea of), you can’t simply do a find and replace, since you don’t know which value to look for in the source code. I prefer to simply use a color picker…

I know my experience is very different from most people’s, so that’s why my approach is also different; your mileage may vary… If I ever need to use any of these tools it won’t be an issue (I have no strong barrier against them), I just don’t think they would save me enough time right now to outweigh the drawbacks. Pick your tools based on the project and your workflow. Just because I listed a couple of issues doesn’t mean you should rule out using a pre-processor; for many cases it would be an awesome way of generating stylesheets. Just think about the drawbacks and be responsible.

“With great power comes great responsibility.” – Uncle Ben to Peter Parker

PS: I love CSS, for me it’s one of the most rewarding tasks on a website development, it’s like solving a hard puzzle…

Background Info on Regular Expressions

This is what Wikipedia has to say about them:

In computing, regular expressions provide a concise and flexible means for identifying strings of text of interest, such as particular characters, words, or patterns of characters. Regular expressions (abbreviated as regex or regexp, with plural forms regexes, regexps, or regexen) are written in a formal language that can be interpreted by a regular expression processor, a program that either serves as a parser generator or examines text and identifies parts that match the provided specification.

Now, that doesn’t really tell me much about the actual patterns. The regexes I’ll be going over today contain characters such as \w, \s, \1, and many others that represent something totally different from what they look like.

If you’d like to learn a little about regular expressions before you continue reading this article, I’d suggest watching the Regular Expressions for Dummies screencast series.

The eight regular expressions we’ll be going over today will allow you to match a(n): username, password, email, hex value (like #fff or #000), slug, URL, IP address, and an HTML tag. As the list goes down, the regular expressions get more and more confusing. The pictures for each regex in the beginning are easy to follow, but the last four are more easily understood by reading the explanation.

The key thing to remember about regular expressions is that they are almost read forwards and backwards at the same time. This sentence will make more sense when we talk about matching HTML tags.

Note: The delimiters used in the regular expressions are forward slashes, “/”. Each pattern begins and ends with a delimiter. If a forward slash appears in a regex, we must escape it with a backslash: “\/”.

1. Matching a Username

Pattern:

/^[a-z0-9_-]{3,16}$/

Description:

We begin by telling the parser to find the beginning of the string (^), followed by any lowercase letter (a-z), number (0-9), underscore, or hyphen. Next, {3,16} makes sure that there are at least 3 of those characters, but no more than 16. Finally, we want the end of the string ($).

String that matches:

my-us3r_n4m3

String that doesn’t match:

th1s1s-wayt00_l0ngt0beausername (too long)
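
If you want to experiment with these patterns outside of a regex tester, here is a quick check in Python (drop the enclosing “/” delimiters, which Python doesn’t use); the same approach works for the password, hex, and slug patterns that follow:

```python
import re

# the username pattern from this section, minus the "/" delimiters
username = re.compile(r"^[a-z0-9_-]{3,16}$")

print(bool(username.match("my-us3r_n4m3")))                     # True
print(bool(username.match("th1s1s-wayt00_l0ngt0beausername")))  # False: too long
```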

2. Matching a Password

Pattern:

/^[a-z0-9_-]{6,18}$/

Description:

Matching a password is very similar to matching a username. The only difference is that instead of 3 to 16 letters, numbers, underscores, or hyphens, we want 6 to 18 of them ({6,18}).

String that matches:

myp4ssw0rd

String that doesn’t match:

mypa$$w0rd (contains a dollar sign)

3. Matching a Hex Value

Pattern:

/^#?([a-f0-9]{6}|[a-f0-9]{3})$/

Description:

We begin by telling the parser to find the beginning of the string (^). Next, the number sign is optional because it is followed by a question mark. The question mark tells the parser that the preceding character (in this case a number sign) is optional, but to be “greedy” and capture it if it’s there. Next, inside the first group (the first set of parentheses), we can have two different situations. The first is any lowercase letter between a and f, or any digit, exactly six times. The vertical bar tells us that we can instead have three lowercase letters between a and f or digits. Finally, we want the end of the string ($).

The reason I put the six-character alternative first is that the parser will then capture a full hex value like #ffffff. If I had reversed it so that the three-character alternative came first, the parser would only pick up #fff and not the other three f’s.

String that matches:

#a3c113

String that doesn’t match:

#4d82h4 (contains the letter h)

4. Matching a Slug

Pattern:

/^[a-z0-9-]+$/

Description:

You will be using this regex if you ever have to work with mod_rewrite and pretty URLs. We begin by telling the parser to find the beginning of the string (^), followed by one or more (the plus sign) letters, numbers, or hyphens. Finally, we want the end of the string ($).

String that matches:

my-title-here

String that doesn’t match:

my_title_here (contains underscores)

5. Matching an Email

Pattern:

/^([a-z0-9_\.-]+)@([\da-z\.-]+)\.([a-z\.]{2,6})$/

Description:

We begin by telling the parser to find the beginning of the string (^). Inside the first group, we match one or more lowercase letters, numbers, underscores, dots, or hyphens. I have escaped the dot because a non-escaped dot means any character. Directly after that, there must be an at sign. Next is the domain name, which must be one or more lowercase letters, numbers, underscores, dots, or hyphens. Then another (escaped) dot, with the extension being two to six letters or dots. I allow 2 to 6 because of country-specific TLDs (.ny.us or .co.uk). Finally, we want the end of the string ($).

String that matches:

john@doe.com

String that doesn’t match:

john@doe.something (TLD is too long)
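
Checking the two example strings in Python (again without the “/” delimiters):

```python
import re

# the email pattern from this section
email = re.compile(r"^([a-z0-9_\.-]+)@([\da-z\.-]+)\.([a-z\.]{2,6})$")

print(bool(email.match("john@doe.com")))        # True
print(bool(email.match("john@doe.something")))  # False: TLD longer than six
```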

6. Matching a URL

Pattern:

/^(https?:\/\/)?([\da-z\.-]+)\.([a-z\.]{2,6})([\/\w \.-]*)*\/?$/

Description:

This regex is almost like taking the ending part of the above regex, slapping it between “http://” and some file structure at the end. It sounds a lot simpler than it really is. To start off, we search for the beginning of the line with the caret.

The first capturing group is all optional. It allows the URL to begin with “http://”, “https://”, or neither of them. I have a question mark after the s to allow URLs that have http or https. In order to make this entire group optional, I just added a question mark to the end of it.

Next is the domain name: one or more numbers, letters, dots, or hyphens, followed by another dot, then two to six letters or dots. The following section is the optional files and directories. Inside the group, we want to match any number of forward slashes, letters, numbers, underscores, spaces, dots, or hyphens. Then we say that this group can be matched as many times as we want. Pretty much this allows multiple directories to be matched along with a file at the end. I have used the star instead of the question mark because the star means zero or more, not zero or one. If a question mark were used there, only one file or directory could be matched.

Then a trailing slash is matched, but it can be optional. Finally we end with the end of the line.
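
This section gives no example strings, so here is a quick Python check with strings of my own (Python needs no “/” delimiters, so the escaped slashes can be written plainly):

```python
import re

# the URL pattern from this section, with the "\/" escapes unwrapped
url = re.compile(r"^(https?://)?([\da-z\.-]+)\.([a-z\.]{2,6})([/\w .-]*)*/?$")

print(bool(url.match("https://tutsplus.com/about")))  # True
print(bool(url.match("htp:/broken")))                 # False
```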

7. Matching an IP Address

Pattern:

/^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$/

Description:

Now, I’m not going to lie: I didn’t write this regex; I got it from here. But that doesn’t mean I can’t rip it apart character by character.

The first capture group really isn’t a captured group, because ?: was placed inside it, which tells the parser not to capture this group (more on this in the last regex). We also want this non-captured group to be repeated three times (the {3} at the end of the group). This group contains another group, a subgroup, and a literal dot. The parser looks for a match in the subgroup, then a dot, and moves on.

The subgroup is also a non-capture group. It’s just a bunch of character sets (things inside brackets): the string “25” followed by a number between 0 and 5; or the string “2” and a number between 0 and 4 and any number; or an optional zero or one followed by two numbers, with the second being optional.

After we match three of those, it’s on to the next non-capturing group. This one wants: the string “25” followed by a number between 0 and 5; or the string “2” with a number between 0 and 4 and another number at the end; or an optional zero or one followed by two numbers, with the second being optional.

We end this confusing regex with the end of the string.

String that matches:

73.60.124.136 (no, that is not my IP address :P)

String that doesn’t match:

256.60.124.136 (the first group must be “25” and a number between zero and five)
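
The description above corresponds to the classic pattern below; a quick Python check:

```python
import re

# each octet: 250-255, 200-249, or 0-199 (with an optional leading 0/1)
ip = re.compile(
    r"^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}"
    r"(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$"
)

print(bool(ip.match("73.60.124.136")))   # True
print(bool(ip.match("256.60.124.136")))  # False
```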

8. Matching an HTML Tag

Pattern:

/^<([a-z]+)([^<]+)*(?:>(.*)<\/\1>|\s+\/>)$/

Description:

This is one of the more useful regexes on the list. It matches any HTML tag with its content inside. As usual, we begin with the start of the line.

First comes the tag’s name, which must be one or more letters long. This is the first capture group, and it comes in handy when we have to grab the closing tag. Next come the tag’s attributes: any characters except an opening angle bracket (<). Since this part is optional, but should match more than one character when present, the star is used: the plus sign makes up one attribute-and-value run, and the star allows as many of those runs as you want.

Next comes the third non-capture group. Inside, it will contain either a greater than sign, some content, and a closing tag; or some spaces, a forward slash, and a greater than sign. The first option looks for a greater than sign followed by any number of characters and then the closing tag. \1 is used to represent the content captured in the first capturing group, in this case the tag’s name. If that can’t be matched, we look for a self-closing tag (like an img, br, or hr tag), which needs one or more spaces followed by “/>”.
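
A quick Python check of the tag pattern described above (using the widely circulated form of the pattern, shown in the compile call):

```python
import re

# tag name is captured in group 1; \1 reuses it for the closing tag
tag = re.compile(r"^<([a-z]+)([^<]+)*(?:>(.*)</\1>|\s+/>)$")

print(bool(tag.match('<a href="#">link</a>')))  # True
print(bool(tag.match("<br />")))                # True: self-closing branch
print(bool(tag.match("<br>")))                  # False: no closing tag
```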

Most modern PHP applications access important user information and store it in a database. For example, a web app might have a registration system for new users. But how should you store usernames and passwords in the database?

You must always think about security. If passwords are stored in plain text, what happens if an attacker gains access to your database? He can easily read all of the users’ passwords. That’s why we use a technique called password hashing to prevent attackers from getting user passwords.

In this article you’ll learn how to store the passwords securely in the database so that, even if your database falls into wrong hands, no damage will be done.

What Is Password Hashing?

Hashing is not a new concept. It has been in practical use for quite a long time. To understand hashing, think about fingerprints. Every person has a unique fingerprint. Similarly, each string can have a unique fixed-size “digital fingerprint” called a hash. For a good hashing algorithm, it’s very rare that two different strings will have same hash (called a collision).

The most important feature of hashes is that the hash generation process is one way. The one-way property means it’s impossible to recover the original text from its hash. Therefore password hashing perfectly suits our need for secure password storage: instead of storing a password in plain text, we can hash the password and store the resulting hash. If an attacker later gains access to the database, he can’t recover the original password from the hash.

But what about authentication? You can no longer compare the password entered by the user in a login form with the hash stored in the database. Instead, you need to hash the login password and compare the result with the stored hash.

How Hashing Is Done In PHP

There are different algorithms for generating the hash of a text. The most popular ones are MD5, SHA1, and Bcrypt. Each of these algorithms is supported in PHP. You really should be using Bcrypt, but I’ll present the other alternatives first because they help illustrate what you need to do to protect your passwords.

Let’s start with PHP’s md5() function which can hash passwords according to the MD5 hashing algorithm. The following example demonstrates the registration process:
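
A minimal sketch of what such a registration script might look like (the table layout and variable names are my invention, and $db is assumed to be an existing PDO connection):

```php
<?php
// values submitted from the registration form
$username = $_POST["username"];
$password = $_POST["password"];

// hash the password before it ever touches the database
$hashedPassword = md5($password);

// store the username and the hash, never the plain-text password
$stmt = $db->prepare("INSERT INTO users (username, password) VALUES (?, ?)");
$stmt->execute(array($username, $hashedPassword));
```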

In the above example, md5() creates a 128-bit hash out of the given password. It’s also possible to use sha1() instead of md5(), which produces a 160-bit hash (meaning there’s less chance of a collision).

If you generate an MD5 hash of the string “MySecretPassword” and output it to the browser, it will look like the following:

7315a012ecad1059a3634f8be1347846

“MySecretPassword” when hashed with SHA1 will produce the following output:

952729c61cab7e01e4b5f5ba7b95830d2075f74b

Never hash a password twice. It does not add extra security; rather, it makes the hash weak and inefficient. For example, don’t create an MD5 hash of a password and then provide it as input to sha1(). That simply increases the probability of hash collisions.

Taking Password Hashing to the Next Level

Researchers have found several flaws in the SHA1 and MD5 algorithms. That’s why modern PHP applications shouldn’t use these two hash functions. Rather, they should use hash algorithms from the SHA2 family, like SHA256 or SHA512. As the names suggest, they produce hashes 256 and 512 bits long. They are newer and considerably stronger than MD5, and as the number of bits increases, the probability of a collision decreases. Either of the two is more than sufficient to keep your application secure.

The following code shows how to use SHA256 hashing in PHP:

<?php
$password = hash("sha256", $password);

PHP offers the built-in function hash(). The first argument to the function is the algorithm name (you can pass algorithm names like sha256, sha512, md5, sha1, and many others). The second argument is the string that will be hashed. The result it returns is the hashed string.

Paranoia is Good – Using Salts for Added Security

Being paranoid about the security of your system is good. So let’s consider another case. You have hashed the user’s password and stored it in the database table. Even if an attacker gets access to the database, he won’t be able to determine the original password. But what if he compares all the password hashes with one another and finds some of them to be the same? What does this indicate?

We already know two strings will have the same hash only if both of them are the same (in the absence of collisions). So if the attacker sees identical hashes, he can infer that the passwords for those accounts are the same. If he already knows the password to one of those accounts, he can simply use it to gain access to all of the accounts with the same password.

The solution is to use a random value, referred to as a salt, while generating the hash. Each time we generate the hash of a password, we use a random salt: generate a random string of a particular length, append it to the plain text password, and then hash the combination. This way, even if the passwords for two accounts are the same, the generated hashes will not be, because the salts used in the two cases are different.
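
A sketch of salted hashing along those lines (variable names are mine):

```php
<?php
// a random salt; rand() supplies the prefix, true adds extra entropy
$salt = uniqid(rand(), true);

// append the salt to the plain-text password, then hash the combination
$hashedPassword = hash("sha256", $password . $salt);

// store both $hashedPassword and $salt so the login check can repeat this
```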

To create a salt we use the uniqid() function. The first argument is rand(), which generates a random integer. The second argument is true, to increase the chance of the generated value being unique.

To authenticate the user, you must store the salt used for hashing the password (it’s possible to store the salt in another column of the same table where you keep the username and password). When the user tries to log in, append the salt to the entered password and then hash the result with the same hash function.

Going Even Further: Using BCrypt

Learning about MD5/SHA1 and salts is good for gaining an understanding of what’s needed for secure storage. But for implementing a serious security plan, Bcrypt is the hashing technique you should be using.

Bcrypt is based on the Blowfish symmetric block cipher. Ideally, we want a hashing algorithm that works very slowly for an attacker’s automated cracking attempts, but not so slowly that we can’t use it in real world applications. With Bcrypt, we can make the algorithm work n times slower, adjusting n so that it doesn’t exceed our resources. Also, if you use Bcrypt there is no need to store your salts in the database.

Let’s have a look at an example that uses the crypt() function to hash the password:
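
A sketch of Bcrypt hashing via crypt() (the helper name, cost value, and salt-generation shortcut are my own):

```php
<?php
function generateBcryptHash($password)
{
    // crypt() only supports Blowfish when this constant is 1
    if (defined("CRYPT_BLOWFISH") && CRYPT_BLOWFISH == 1) {
        // '$2y$' selects Blowfish, '10' is the cost parameter (4 to 31),
        // followed by a 22-character alphanumeric salt
        $salt = '$2y$10$' . substr(sha1(mt_rand()), 0, 22);
        return crypt($password, $salt);
    }
    return false;
}
```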

The above function checks whether the Blowfish cipher is available through the CRYPT_BLOWFISH constant. If so, we generate a random salt. The requirement is that the salt start with “$2a$” (or “$2y$”; see this notice on php.net) to indicate that the algorithm is Blowfish, followed by a two-digit number from 4 to 31. This number is a cost parameter that makes brute force attacks take longer. We then append an alphanumeric string of 22 characters as the main portion of our salt. The alphanumeric string can also include ‘.’ and ‘/’.

Summary

An important security measure is to always hash your users’ passwords before storing them in your database, and to use modern hashing algorithms like Bcrypt, SHA256, or SHA512 to do so. That way, even if an attacker gains access to your database, he won’t have the actual passwords of your users. This article explained the principles behind hashing, salts, and Bcrypt.

Dependency injection is the answer to more maintainable, testable, modular code.

Every project has dependencies, and the more complex the project is, the more dependencies it will most likely have. The most common dependency in today’s web applications is the database, and chances are if it goes down the application will altogether stop working. That is because the code is dependent on the database server, which is perfectly fine. Not using a database server because it could one day crash would be a bit ridiculous. Even though the dependency has its flaws, it still makes life for the code, and thus the developer, a lot easier.

The problem with most dependencies is the way that code handles and interacts with them. What I really mean is the problem is in the code and not the dependency. If you are not using dependency injection, chances are your code looks something like this:
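
A sketch of the kind of code meant here (class and variable names are illustrative):

```php
<?php
class Book
{
    private $db;

    public function __construct()
    {
        // the constructor reaches outside itself for its dependency
        global $database;
        $this->db = $database;
    }
}
```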

The book object is now given full access to the database once it is constructed. That is good; the book needs to be able to talk to the database and pull data. The problem lies in the way the book gained its access. In order for the book to talk to the database, the code must have an outside variable named $database, or worse, a singleton (registry) class holding a record for a database connection. If neither of these exists, the book fails, making the code far from modular.

This raises the question, how exactly does the book get access to the database? This is where inversion of control comes in.

In Hollywood, a struggling actor does not call up a director and ask for a role in his next film. No, the opposite happens: the director calls the actor and asks him to play the main character in his next movie. Objects are struggling actors: they do not get to pick the roles they play; the director tells them what to do. Objects do not get to pick the outside systems they interact with; instead, the outside systems are given to the objects. Remember this as Inversion of Control.

This is how a developer tells his objects how to interact with outside dependencies:
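
A sketch of the same class receiving its dependency from outside (names are mine):

```php
<?php
class Book
{
    private $db;

    // the outside system is handed in; the book no longer goes looking for it
    public function setDatabase($database)
    {
        $this->db = $database;
    }
}
```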

This code allows for the book class to be used in any web application. The Book is no longer dependent on anything other than the developer supplying a database shortly after object creation.

This is dependency injection at its finest. There are two common ways of injecting dependencies: constructor injection and setter injection. Constructor injection involves passing all of the dependencies as arguments when creating a new object. The code would look something like this:

$book = new Book($databaseConnection, $configFile);

There are some issues with constructor injection. First, the more dependencies a class has, the messier the constructor becomes. Passing three or four dependencies through a constructor is extremely hard to read. Also, the less work a constructor does, the better.

This leaves us with our second method of dependency injection, setter injection: a dependency is set through a public method on the class.
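A sketch of setter injection in use, with the Book class redefined here so the example is self-contained (the setConfig method and the config values are illustrative):

```php
<?php
// Setter injection in use: every new Book costs three lines of
// wiring, and each extra dependency adds another line.
class Book
{
    private $database;
    private $config;

    public function setDatabase($database) { $this->database = $database; }
    public function setConfig($config)     { $this->config = $config; }
}

$databaseConnection = new stdClass();     // stand-ins for real dependencies
$configFile = 'app-config.ini';

$book = new Book();                       // line 1
$book->setDatabase($databaseConnection);  // line 2
$book->setConfig($configFile);            // line 3
```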

This is easy to follow, but it leads to writing more and more code for your application. When a book object is created, three lines of code are required. If we have to inject another dependency, a fourth line is needed. This gets messy quickly.

The answer to this problem is a container: a class designed to hold, create, and inject all the dependencies needed for an application or class. Here is an example:
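The container listing is not preserved here; a sketch consistent with the surrounding description follows. Only the makeBook name comes from the text; the register method and the registry layout are assumptions.

```php
<?php
class Book
{
    private $database;
    public function setDatabase($database) { $this->database = $database; }
}

// A dependency container: register shared services once, then
// build fully injected objects on demand.
class Container
{
    private static $registry = array();

    // Register a dependency under a name at application boot.
    public static function register($name, $dependency)
    {
        self::$registry[$name] = $dependency;
    }

    // Static for ease of use and global access: it controls
    // creation only, never access.
    public static function makeBook()
    {
        $book = new Book();
        $book->setDatabase(self::$registry['database']);
        return $book;
    }
}

Container::register('database', new stdClass()); // stand-in for a real connection
$book = Container::makeBook();                   // a fully wired Book
```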

All dependencies should be registered with the container object at run time. The container is now the gateway that every dependency must pass through before it can interact with any class. This is the dependency container.

The reason makeBook is a public static function is ease of use and global access. Earlier I referred to the singleton pattern and global access as poor design choices. They are, for the most part: it is bad design when they control access, but it is perfectly fine when they control creation. The makeBook function is only a shortcut for creation. There is no dependency whatsoever between the Book class and the Container class. The Container class exists so we can keep our dependencies in one location and automatically inject them with a single line of code.

The container class removes all of the extra work of dependency injection.

Before injection:

$book = new Book();

And now:

$book = Container::makeBook();

Hardly any extra work, but tons of extra benefits.

When test code is run, specifically unit tests, the goal is to see whether a method of a class works correctly. Since the Book class requires database access to read book data, it adds a whole layer of complexity. The test has to acquire a database connection, pull data, and test it. Suddenly the test is no longer testing a single method of the Book class; it is also testing the database. If the database is offline, the test fails. This is far from the goal of a unit test.

A way of dealing with this is to use a different database dependency for the unit tests. When the test suite starts up, a dummy database is injected into the book. The dummy database will always contain the data the developer expects. If a live database were used in a unit test, the data could change, causing tests to fail unnecessarily.
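A minimal sketch of this idea, assuming a Book with a getTitle method and a hypothetical DummyDatabase fake (all names beyond setDatabase are illustrative):

```php
<?php
// With injection, a unit test can hand the Book a dummy database
// that always returns known data, so only Book itself is tested.
class Book
{
    private $database;
    public function setDatabase($database) { $this->database = $database; }
    public function getTitle($id)          { return $this->database->fetchTitle($id); }
}

// The fake: no connection, no network, always the expected data.
class DummyDatabase
{
    public function fetchTitle($id) { return 'A Tale of Two Cities'; }
}

$book = new Book();
$book->setDatabase(new DummyDatabase());               // inject the fake
assert($book->getTitle(1) === 'A Tale of Two Cities'); // exercises Book only
```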

The code is more modular because it can be dropped into any other web application. Create the book object and inject a database connection with $book->setDatabase(). It does not matter whether the database lives in Registry::Database, $database, or $someRandomDatabaseVariable. As long as there is a database connection, the book will work inside any system.

The code is more maintainable because each object is given exactly what it needs. If separate database connections are required between different instances of the same class, no extra code is needed inside the class whatsoever: give book1 access to database1 and book2 access to database2.

As you can see in the above example, the pagination object contains "next_url", which is a link to the next set of pictures. So what if you don't want a link to the next set, but one big, bulk response of all the pictures? We create an array of responses, of course!

As you can see in the above snippet (reading the comments), we take the set of parameters, use curl to send them to the endpoint, receive and decode our response, and return it. Simple. What we need to do is leverage this function with another function that keeps looping as long as it finds a "next_url" object in the most recently received JSON response.
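A minimal reconstruction of such an _apiCall helper (the function name comes from the text; everything else is an assumption) might be:

```php
<?php
// Send the request with curl, decode the JSON body, and return
// the resulting object.
function _apiCall($url)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
    curl_setopt($ch, CURLOPT_TIMEOUT, 30);          // don't hang forever
    $raw = curl_exec($ch);
    curl_close($ch);

    return json_decode($raw); // stdClass with e.g. ->pagination->next_url
}
```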

And there you have it. The function above takes the URL (the endpoint for the API) and defines the $results variable, an empty array we add to for each page, and the $gotAllResults variable, which is set to TRUE once there are no more pages (no more "next_url" objects). Our while statement first tests that $gotAllResults is FALSE, then performs the _apiCall function seen further above, receives the response, and adds it to the array. The if statement checks the current response to see whether the "next_url" object exists; if it no longer does (meaning we have reached the end of the responses), it sets $gotAllResults to TRUE, ending the while statement so we can return our array of results.
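A reconstruction of the loop just described, assuming an _apiCall($url) helper that returns the decoded JSON for one page and a response shape with ->pagination->next_url:

```php
<?php
// Collect every page by following "next_url" until it disappears.
function getAllResults($url)
{
    $results = array();
    $gotAllResults = false;

    while (!$gotAllResults) {
        $response  = _apiCall($url);
        $results[] = $response;

        if (isset($response->pagination->next_url)) {
            $url = $response->pagination->next_url; // follow to the next page
        } else {
            $gotAllResults = true; // no more pages: stop and return
        }
    }

    return $results;
}
```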

Now that we have our big array of JSON responses, parsing it can be tricky. Below is a snippet similar to the one I used to parse and display my API call, showing the image and some information about it...

You must have heard of push technology being used in the tech industry for a long time now. And with the web everywhere and an enormous user base accessing websites, how do you keep data real-time without eating up your network and database resources?
Push technology is one solution in such cases. It is not really a specific form of tech, but a concept.
Example: Facebook live notifications or Gmail incoming mail notifications.
You must have wondered how the notification ticks arrive so quickly, as soon as a friend pokes you or likes your post.
One solution is firing a “check my notifications” request every few seconds. That might work fine if your application is small enough, or if you have deep pockets for the server and network headaches.
The better solution here uses comet, forever frames, or long polling.

Well, the forever frame technique is quite old, but it may be just what you need in your next app.

A common method of doing such notifications is to poll a script on the server (using ajax) on a given interval (perhaps every few seconds), to check if something has happened. However, this can be pretty network intensive, and you often make pointless requests, because nothing has happened.

The way Facebook does it uses the comet approach: rather than polling on an interval, as soon as one poll completes, it issues another. However, each request to the script on the server has an extremely long timeout, and the server only responds to the request once something has happened. You can see this happening if you bring up Firebug’s Console tab while on Facebook, with requests to a script possibly taking minutes. It is quite ingenious, really, since this method immediately cuts down on both the number of requests and how often you have to send them. You effectively now have an event framework that allows the server to ‘fire’ events.

Behind this, in terms of the actual content returned from those polls, it is a JSON response with what appears to be a list of events and info about them. It is minified, though, so it is a bit hard to read.

In terms of the actual technology, ajax is the way to go here, because you can control request timeouts and many other things. I’d recommend (Stack Overflow cliché here) using jQuery for the ajax; it will take away a lot of the cross-compatibility problems. In terms of PHP, you could simply poll an event-log database table in your PHP script and only return to the client when something happens. There are, I expect, many ways of implementing this.

Implementing:

Server Side:

There appear to be a few implementations of comet libraries in PHP, but to be honest, it really is very simple, something perhaps like the following pseudocode:
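A PHP sketch of the server side of such a long poll, where has_event_happened() and get_events() are the hypothetical helpers (e.g. queries against an events table):

```php
<?php
// Hold the request open until an event occurs or we give up.
function long_poll($timeout = 60)
{
    $start = time();

    while (time() - $start < $timeout) {
        if (has_event_happened()) {
            return get_events(); // respond only once something happened
        }
        sleep(1); // don't hammer the database while waiting
    }

    return array(); // timed out with no events; the client simply re-polls
}

// In the real endpoint script:
// echo json_encode(long_poll());
```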

The has_event_happened function would just check whether anything had happened in an events table or similar, and the get_events function would return a list of the new rows in the table. It depends on the context of the problem, really.

In my hunt for the perfect clean URL (smart URL, slug, permalink, whatever) generator, I’ve always run into some exception or bug that made the function a piece of junk. But I recently found an easy solution that I hope I can call “definitive”.

Clean URL generators are crucial for search engine optimization, or just to tidy up the site navigation. They are even more important if you work with international characters: accented vowels (à, è, ì, ...), cedilla (ç), dieresis (ë), tilde (ñ), and so on.

First of all we need to strip all special characters and punctuation away. This is easily accomplished with something like:
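The listing did not survive here; a common minimal version of such a toAscii function (a sketch, not necessarily the author’s exact code) looks like this:

```php
<?php
// Strip everything except letters, digits, and a few separator
// characters, then collapse separators and spaces into dashes.
function toAscii($str)
{
    $clean = preg_replace('/[^a-zA-Z0-9\/_|+ -]/', '', $str);
    $clean = strtolower(trim($clean, '-'));
    $clean = preg_replace('/[\/_|+ -]+/', '-', $clean);

    return $clean;
}

echo toAscii("Hi! I'm the title of your page!"); // hi-im-the-title-of-your-page
```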

With our toAscii function we can convert a string like “Hi! I’m the title of your page!” to hi-im-the-title-of-your-page. This is nice, but what happens with a title like “A piñata is a paper container filled with candy”?

The result will be a-piata-is-a-paper-container-filled-with-candy, which is not cool. We need to convert all special characters to the closest ascii character equivalent.
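The transliteration step presumably used iconv, which the text mentions further on; a sketch of that version follows. Note that //TRANSLIT output can vary between iconv implementations and locales.

```php
<?php
// Transliterate accented characters to their closest ASCII
// equivalents first, then strip and dash-collapse as before.
setlocale(LC_ALL, 'en_US.UTF8'); // iconv transliteration is locale-sensitive

function toAscii($str)
{
    $clean = iconv('UTF-8', 'ASCII//TRANSLIT', $str);
    $clean = preg_replace('/[^a-zA-Z0-9\/_|+ -]/', '', $clean);
    $clean = strtolower(trim($clean, '-'));
    $clean = preg_replace('/[\/_|+ -]+/', '-', $clean);

    return $clean;
}
```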

I always work with UTF-8, but you can obviously use any character encoding recognized by your system. The piñata text is now transliterated into a-pinata-is-a-paper-container-filled-with-candy. Lovable.

If they are not Spanish, users will hardly search your site for the word piñata; they will most likely search for pinata. So you may want to store both versions in your database: a title field with the actual displayed text and a slug field containing its ASCII counterpart.

We can add a delimiter parameter to our function so we can use it to generate both clean URLs and slugs (in newspaper editing, a slug is a short name given to an article that is in production).

There’s one more thing. The string “I’ll be back!” is converted to ill-be-back. This may or may not be an issue depending on your application. If you use the function to generate a searchable slug, for example, looking for “ill” would return the famous Terminator quote, which probably isn’t what you wanted.
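A sketch of the final version, with a $replace list handled before iconv runs and a configurable $delimiter (the parameter names are assumptions; the toAscii("I'll be back!", "'") call matches the text):

```php
<?php
setlocale(LC_ALL, 'en_US.UTF8'); // iconv transliteration is locale-sensitive

// Characters in $replace (e.g. the apostrophe) become spaces
// before transliteration, and the output separator is configurable.
function toAscii($str, $replace = array(), $delimiter = '-')
{
    if (!empty($replace)) {
        $str = str_replace((array) $replace, ' ', $str);
    }

    $clean = iconv('UTF-8', 'ASCII//TRANSLIT', $str);
    $clean = preg_replace('/[^a-zA-Z0-9\/_|+ -]/', '', $clean);
    $clean = strtolower(trim($clean, '-'));
    $clean = preg_replace('/[\/_|+ -]+/', $delimiter, $clean);

    return $clean;
}

echo toAscii("I'll be back!", "'"); // i-ll-be-back
```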

You can now pass custom delimiters to the function. Calling toAscii("I'll be back!", "'") you’ll get i-ll-be-back. Also note that the apostrophe is replaced before the string is converted to ASCII, as character-encoding conversion may lead to weird results (for example, é is converted to 'e), so the apostrophe needs to be handled before the string is mangled by iconv.