IP Loopback Webserver Restriction Rule

Hi,

Has anyone else fallen foul of the IP loopback restriction rule that shared-hosting companies seem to be implementing?

I've been told by my hosting company that it's there to stop programming loops, but IMHO there are far more common ways to create runaway loops that slow a server down than a program on one website calling a program on another website on the same server, which then calls back to the originating program.

It seems to me that this one rule ends up unnecessarily preventing a lot of quite legitimate web programming.

For example, you may want to park a domain on a sub-domain and have it call a module on the main domain to retrieve some database info for display. You would quite legitimately need to use the main domain's full URL for that call, rather than a relative or path-only URL, but not with this rule in place.

Or you might provide a service that other websites (owned by you or by other people) call from code to retrieve information to display on their pages. With the loopback rule in place, if any of the calling websites resides on the same web server as the called website (entirely possible with the larger hosting companies), then this won't be allowed either.
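For what it's worth, the check behind such a rule is trivial to implement, which may be part of why hosts reach for it. Here is a minimal sketch of how a host-side filter might decide that an outgoing request targets the local machine; the function name is my own, not any hosting company's actual code, and real filters would also compare against the server's public IPs, not just the loopback range:

```python
import ipaddress
import socket

def is_loopback_target(hostname: str) -> bool:
    """Return True if a request to `hostname` would resolve back to
    this machine - the kind of check a shared host's loopback filter
    might make before allowing an outgoing HTTP call."""
    try:
        resolved = socket.gethostbyname(hostname)
    except socket.gaierror:
        # Unresolvable names can't loop back to us.
        return False
    return ipaddress.ip_address(resolved).is_loopback

# A sub-domain calling back to a name on the same box trips the check:
print(is_loopback_target("localhost"))  # True
print(is_loopback_target("8.8.8.8"))    # False
```

In a shared-hosting setup the filter would typically sit in the outbound firewall or in a patched HTTP client, so the site's own code never sees the request leave.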

Of course, with enough thought, workarounds can always be found, but it just seems to me that this is a rule too far.

Does anyone know how this rule came into being (e.g. was it a theoretical problem dreamt up by an academic, or a real problem experienced by a hosting company), and why hosting companies seem so fixated on it when, IMHO, it causes far more problems than it solves?

> For example, you may want to park a domain on a sub-domain and have it call a module on the main domain to retrieve some database info for display. You would quite legitimately need to use the main domain's full URL for that call, rather than a relative or path-only URL, but not with this rule in place.

I don't see this as a legitimate need to run an HTTP request. If you own both sites and both sites are on the same server, there are a huge number of vastly more efficient communication methods.
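One of those more efficient methods: since both sites sit on the same server, the sub-domain can read the shared database directly instead of making an HTTP round-trip through the main domain. A sketch, with an in-memory SQLite table standing in for the sites' shared database (the table name, key, and `get_info` helper are all made up for illustration):

```python
import sqlite3

# Toy stand-in for the database both sites can already reach directly.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE info (key TEXT, value TEXT)")
conn.execute("INSERT INTO info VALUES ('tagline', 'Hello from the main site')")

def get_info(key: str) -> str:
    """What the sub-domain would otherwise fetch over HTTP from the
    main domain, read straight from the shared database instead."""
    row = conn.execute(
        "SELECT value FROM info WHERE key = ?", (key,)
    ).fetchone()
    return row[0] if row else ""

print(get_info("tagline"))  # Hello from the main site
```

No sockets, no web-server overhead, and no loopback rule to trip over; the same idea applies to including the main domain's code as a shared module.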

> Or you might provide a service that other websites (owned by you or by other people) call from code to retrieve information to display on their pages. With the loopback rule in place, if any of the calling websites resides on the same web server as the called website (entirely possible with the larger hosting companies), then this won't be allowed either.

I can see this being a potential problem, but again, if you own both sites there are vastly more efficient ways for them to communicate when they reside on the same server. The probability of any given third party's website being hosted on the same server as yours is pretty small, but certainly not zero.

> Does anyone know how this rule came into being (e.g. was it a theoretical problem dreamt up by an academic, or a real problem experienced by a hosting company), and why hosting companies seem so fixated on it when, IMHO, it causes far more problems than it solves?

I have personally seen and debugged websites with callback loops in them. They take a massive toll on the server's resources, far more than a normal infinite loop does, and they are far more difficult to detect and debug. Additionally, apart from the API situation you mentioned above, I can't think of any other legitimate reason to use a local network callback from a website.