Month: May 2015

With all the hype surrounding the REST architecture, it may be tempting to think it is the only one of its kind. However, there are quite a number of other architectures that allow for remote cooperation.

Today we will see some of the most popular ones.

XML-RPC

XML-RPC stands for XML Remote Procedure Call. This standard uses XML to encode information about the procedure to be invoked on the remote system.

The XML is carried over HTTP, so no special port is required.

The protocol is widely supported, with native support in Python, PHP, Java and virtually all other major languages.

To date, it still powers the API for the popular publishing platform WordPress.
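As a sketch of what such a call looks like on the wire, Python's standard library ships an xmlrpc module; the method name and arguments below are illustrative:

```python
import xmlrpc.client

# Encode a call to a hypothetical "demo.add" method with two parameters.
# This produces the XML payload that would be POSTed over HTTP.
payload = xmlrpc.client.dumps((5, 3), methodname="demo.add")
print(payload)

# The same module can decode the payload back into Python values.
params, method = xmlrpc.client.loads(payload)
print(method, params)
```

The payload is plain XML wrapping the method name and typed parameter values, which is what makes the protocol so easy to implement in any language.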

JSON-RPC

JSON-RPC is an acronym for JSON Remote Procedure Call. It is very similar to XML-RPC and in fact can be thought of as a port of XML-RPC.

Just like its sister protocol, it is widely supported. It does, however, shine in readability: because it encodes information in JSON (JavaScript Object Notation), it is easily readable by humans and easy to parse for machines.
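To illustrate that readability, here is a minimal sketch of a JSON-RPC 2.0 request built in Python; the method name and parameters are illustrative:

```python
import json

# A minimal JSON-RPC 2.0 request object.
request = {
    "jsonrpc": "2.0",      # protocol version
    "method": "subtract",  # remote procedure to invoke
    "params": [42, 23],    # positional arguments
    "id": 1,               # correlates the response with this request
}

# This string is the entire request body sent to the server.
print(json.dumps(request))
```

Compare this with the equivalent XML-RPC payload and the difference in verbosity is immediately obvious.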

SOAP

SOAP (Simple Object Access Protocol), just like XML-RPC, uses XML. It enables a service to remotely trigger a function on another machine.

The protocol is web compliant and can usually be served through the normal port 80. However, it can also be carried over other protocols, including SMTP, so it can be used in a wider variety of environments and applications.

CORBA

CORBA (Common Object Request Broker Architecture) was developed by the Object Management Group. The system was designed to provide a standard for the interoperability of object-based software components in a distributed environment.

Objects publish their interfaces using the Interface Definition Language (IDL) as defined in the CORBA specification.
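A sketch of what such an IDL definition might look like; the interface and method names here are illustrative, not from any real system:

```
// A minimal CORBA IDL interface. Each language mapping generates
// client stubs and server skeletons from a definition like this.
interface Calculator {
    long add(in long a, in long b);
};
```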

This means the protocol is not as web friendly and typically requires special ports to be open. It does, however, tend to be much faster than JSON-RPC or XML-RPC, since it does not carry the burden of verbosity that XML and JSON bring.

Pervasive Component System

PECOS (PErvasive COmponent System) is a component based model for embedded systems. It consists mainly of components communicating through ports. It provides an environment that supports the specification, composition, configuration checking, and deployment of embedded systems built from software components.

Depending on your situation, one of the above protocols might make the best business sense for your application. If, however, you cannot make a business case for one of them, or you are not sure what would work for you, stick to REST.

Now when you run ctags -R . only the files in your src directory will get reindexed.

This also implies that you need to manually reindex the vendor tags by running the command shown in 3 above. This only needs to happen when you update your composer file. Instructions on automating this part are in the Text Editors and CTags article.


In the above interaction we have informed the user that they are not allowed to make this request. We have also told intermediary machines/services not to retry the request, and provided the next steps.

If you were using a browser, a Basic Auth login window would pop up on this response.

Do not go crazy with the realm value. It is meant to be opaque; that is, your backend system can change without necessitating a change to this value.
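For reference, such a challenge response might look like the following sketch; the realm value and body text are illustrative:

```
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="api"
Content-Type: text/plain

You must authenticate to access this resource.
```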

Next in the conversation, assuming the client has passed the server's authentication challenge, is actually fetching the resource.

The server in this case accepted the Authorization header. Requests without it, or with an invalid one, would have returned a 401 as we saw earlier.
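A sketch of such an authorized exchange; the path, host and credentials (here the base64 of "user:pass") are illustrative:

```
GET /protected/resource HTTP/1.1
Host: example.com
Authorization: Basic dXNlcjpwYXNz

HTTP/1.1 200 OK
Vary: Authorization
Content-Type: application/json
```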

Now intermediary machines know that the request is authorized and can repeat the request, say, in case of network failure.

The Vary header informs the client machine of what other headers influence the response. So clients know that to repeat this request, an Authorization header is required.

However, by default GET requests are cached. This is ordinarily a good thing: the server is saved the extra load of fulfilling requests, and the client experiences less latency. In the case of protected resources, though, this may not be ideal; we should consider limiting the amount of time the client and any intermediaries store the content.

This can be done by looping the caching machines into the conversation once more.
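A sketch of the response headers that achieve this; the values are illustrative:

```
HTTP/1.1 200 OK
Cache-Control: private, max-age=3600
Vary: Authorization
Content-Type: application/json
```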

The Cache-Control header then ensures that these intermediaries do not store the data for more than a specified amount of time, in this case 3600 seconds, or 1 hour. The private directive ensures that the cache is not shared with or served to other clients.