I am using an Apache server running on Ubuntu to serve an in-house web application. When the server is only connected to the local network, it runs fast. When I let it accept incoming requests from the internet, it becomes very slow, even though the access log shows that it is not receiving any more requests than when it was only connected to the local network. The only difference between the two setups is that in the second I have the router forward port 80 to the server. What could be causing a slowdown like this, and how can I prevent it?

Edit: The server was slow even when there were zero clients connecting from the internet. In addition, this website has only 4 total users, and each page request serves one HTML page, one image, and one very small CSS file.

2 Answers

One possible cause is 'HostnameLookups' being enabled on the Apache server.
This causes no noticeable delay on a local network, but can cause a slight delay when access comes from the outside, because Apache has to perform a reverse DNS lookup on each external client's IP address.
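If that turns out to be the culprit, disabling it is a one-line change. A minimal sketch, assuming a stock Ubuntu Apache layout (the directive can also live inside the relevant virtual host):

```apache
# /etc/apache2/apache2.conf (or inside the relevant <VirtualHost> block)
# Off is Apache's default, but it is sometimes switched on for prettier
# logs; with it on, every request triggers a reverse DNS lookup on the
# client's IP address before the request is logged.
HostnameLookups Off
```

After changing it, reload Apache for the setting to take effect.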

Does your server slow down for all users, or only for the users connecting from the internet? Are those users mobile users scattered across the internet, by chance?

Yes, it is entirely possible that your internet users are slowing down your entire host. Let's say the round-trip latency to your server from the local network is 30ms, but from your external users it is 300ms (on average). Handshakes, sending data, receiving data, and so on will each take ten times as long. Servicing interrupts for the network stack carries a price tag, because that work runs at the highest priority on the system, causing applications to wait while the high-priority network activity is addressed. Keep the stack busy longer and you keep the CPU busy longer servicing the stack, leaving less CPU for all of your other users. The degrading effect can be especially pronounced if you have a lot of local users accessing your site.
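The arithmetic above can be sketched as a back-of-envelope calculation. This is a simplified model, not a measurement: it assumes one TCP handshake plus one request/response round trip per file, with one connection per file (HTTP/1.0 style), and it ignores transfer time entirely.

```python
# Lower bound on time spent waiting on round trips per page load.
# The page described above serves 3 files: HTML, image, CSS.
FILES_PER_PAGE = 3
ROUND_TRIPS_PER_FILE = 2  # TCP handshake + request/response (simplified)

def min_latency_ms(rtt_ms, files=FILES_PER_PAGE, rtts=ROUND_TRIPS_PER_FILE):
    """Minimum milliseconds spent waiting on the network, ignoring transfer time."""
    return rtt_ms * rtts * files

local = min_latency_ms(30)    # local users:  30ms RTT -> 180 ms per page
remote = min_latency_ms(300)  # remote users: 300ms RTT -> 1800 ms per page
```

The ten-fold RTT difference translates directly into a ten-fold difference in the time each connection keeps the server's network stack occupied.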

There are some things you can do:

Reduce your handshake events. If you have multiple style sheets, combine them. If you have multiple JavaScript files, combine them.

Introduce compression on your server. Smaller files take less time to transfer.
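On Apache this is typically done with mod_deflate. A sketch, assuming the Ubuntu layout where the module is enabled with `a2enmod deflate`:

```apache
# Enable the module first: a2enmod deflate && systemctl reload apache2
# Compress the text formats this site serves; the image is already
# compressed by its own format, so recompressing it buys little.
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>
```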

Embrace effective client cache management. Take an active role in reducing the number of files which need to be transferred after the initial handshake. Even if you only make your cache age a week, you should see a significant drop-off in requests for static content. Reduced requests = reduced overhead for your server.
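One way to set that week-long cache age on Apache is mod_expires. A sketch, again assuming the Ubuntu module layout; adjust the MIME types to match what your site actually serves:

```apache
# Enable the module first: a2enmod expires && systemctl reload apache2
<IfModule mod_expires.c>
    ExpiresActive On
    # Let browsers keep static assets for the week suggested above
    ExpiresByType text/css   "access plus 1 week"
    ExpiresByType image/png  "access plus 1 week"
    ExpiresByType image/jpeg "access plus 1 week"
</IfModule>
```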

Optimize your graphics. Do you need a 16-million-color palette in a graphic that only uses 256 colors? Optimized palettes result in smaller file sizes.

Consider a different graphic format, one that by default produces smaller files even before compression, such as PNG.

Content distribution? You have a couple of models to choose from, including one that uses a front-end caching appliance to serve the majority of the load. Take a look at Squid on the open-source front (which you could deploy at a hosting provider as a front end) or any of the Akamai-like services. Heck, even Google is getting into the content distribution game.
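For the Squid route, the relevant mode is a reverse proxy ("accelerator") sitting in front of Apache. A minimal sketch of the idea; `www.example.com` and `origin.example.internal` are placeholders for your public site name and the Apache server's address:

```
# squid.conf: accept public traffic on port 80 in accelerator mode
http_port 80 accel defaultsite=www.example.com

# Forward cache misses to the Apache origin server
cache_peer origin.example.internal parent 80 0 no-query originserver name=origin

# Only serve (and forward) requests for our own site
acl our_site dstdomain www.example.com
http_access allow our_site
cache_peer_access origin allow our_site
```

Cacheable static files are then answered by Squid without touching Apache at all.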

In short, big_site + small_pipe is a recipe for poor performance for everyone when the server has to spend so much time servicing network interrupts for the slow-link users.