Simply put, application profiling is the process of localizing application problems. More specifically, as discussed below, we will be profiling applications that deal directly with LAN/WAN communications, such as client/server, "phone-home" type programs.

In a Nutshell:

Latency. We all experience it at some time: the never-ending lag between servers that worsens at certain times of the day (no offense, West Coast, but it's a drag back East at 11am or so). However, latency can be caused by more than a heavy WAN load. Most of the articles I have read center on a heavy WAN load as the primary cause of latency when communicating over vast distances. While this is often the case, I don't think it deserves quite so much of the blame. Consider that the larger pipe providers also have larger budgets to work with, meaning better equipment and longer-lasting stability.

On the LAN:

At 8am, when the company personnel plop down at their desks, what's the very first thing they do? Email. Consider this: you work for a corporation with approximately 2,000 individuals onsite and another 10,000 worldwide. Your primary email server is located in the IT section of your building. You have fiber running from the T drop to the server room, 10 main nodes running 100Base-T, and the rest switched off to 10Base-T. Not a perfect scenario, but a common one nonetheless.

At 8am your time, you have 1,500 employees showing up for work. Of these, 750 have computers, 500 of whom use email constantly, and all 500 check their email within 5 minutes of each other. This is not a problem if your software is written correctly and the mail server can handle the traffic (which it should). It becomes a problem, however, if your email program is written so that it makes multiple requests where a single request would do.

Consider what *could* happen in the client/server matchup. In most circumstances there aren't many request/answer pairs in one client/server communication session, but the point is the same: for every request, there should be an answer. That being the case, poorly written software will make more requests than efficient, streamlined applications. The same holds true in both directions. The server software *could* make the client software seem less efficient than it really is: if the server asks more questions than are needed to complete the task, the inefficiency may show on either end, and vice versa.
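To make the pattern concrete, here is a minimal sketch in Python. The command names are invented for illustration (no real mail protocol is implied); the sketch simply counts messages on the wire for a chatty client versus a streamlined one.

```python
# Hypothetical command names; each request costs one client->server
# message plus one server->client answer.
def round_trips(requests):
    return 2 * len(requests)

# A chatty client asks for each piece of state separately...
chatty = ["HELLO", "WHO_AM_I", "ANY_MAIL?", "HOW_MANY?", "SEND_HEADERS"]
# ...while a streamlined client folds the same work into one exchange.
streamlined = ["HELLO;CHECK_MAIL;SEND_HEADERS"]

print(round_trips(chatty))       # 10 messages on the wire
print(round_trips(streamlined))  # 2 messages on the wire
```

Five requests mean ten messages crossing the network; at 8am, multiply that by 500 clients checking mail at once and the chatty design fills the server's queue five times faster than the streamlined one.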

Again, this is merely an example. Normally, email will not be a problem. The problem areas are usually found in hybrid client/server applications, written for a specific need, that communicate proprietary data between a remote client and a server located somewhere across the WAN.

Across the WAN:

Consider a hybrid database program written exclusively for Acme Widget Equipment (A.W.E.). The database contains an extensive list of all products held in A.W.E.'s 8 nationwide warehouses, and it is centrally located at warehouse #4 in Lebanon, Kansas. All 2,200 client stores nationwide use a proprietary client program to access the database via dialup and broadband connections.

At the database location, warehouse #4, a database lookup takes the same number of steps as it does everywhere else; it simply takes less time because the database is onsite. When the clerk inputs the first line of the query, a request is made for "widgets": the client program interprets this as a request to contact the database, and sends the message "request:widgets".

Let's analyze this before we go any further. First, both sides should already know that the first line will be "widgets". What's the name of the place? Acme WIDGET Equipment. Any request for information can be assumed to relate to "widgets". Second, even if the request for "widgets" were necessary, sending that tiny bit of data all that way just to see if the primary category was available is a large waste of time.

The point is, without counting exact bits and bytes, cram as much information into each packet as you can without exceeding your acceptable MRU/MTU sizes. A pre-configured, properly adjusted MRU/MTU will allow efficient communication over your network. Simple rules of thumb will get you close, and a little monitoring will tweak your network traffic, under normal, static conditions, to an acceptable level.
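As a rough sketch of that rule of thumb, here is one way, in Python, to greedily pack query fields into as few payloads as possible without exceeding the path MTU. The MTU, header overhead, and field names are all assumptions for illustration, not measured values.

```python
# Assumed values: a conservative dialup-era path MTU and IP+TCP overhead.
MTU = 576
HEADERS = 40
PAYLOAD_MAX = MTU - HEADERS  # 536 bytes of usable payload

def pack(fields):
    """Greedily join ';'-separated fields into MTU-sized payloads."""
    payloads, current = [], ""
    for f in fields:
        candidate = f if not current else current + ";" + f
        if len(candidate) <= PAYLOAD_MAX:
            current = candidate
        else:
            payloads.append(current)
            current = f
    if current:
        payloads.append(current)
    return payloads

# Hypothetical query fields: one trip carries the whole query,
# instead of one field per packet.
query = ["category=widgets", "size=3/4in", "finish=zinc", "qty>=500"]
print(pack(query))  # a single payload, well under 536 bytes
```

The whole query fits in one packet with room to spare, so the clerk's lookup costs one round trip across the WAN instead of four.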

To further meet your goals, applications need to decrease not the amount of data in the packets, but the total number of packets and the requests that are made. A server does not queue information based solely on the content within the packet; packets are also queued by arrival time. Therefore, if you send more information per packet, you decrease the number of packets waiting in the queue, which in turn decreases the time it takes the server to process them.
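A back-of-the-envelope sketch makes the queue argument concrete. The per-packet cost below is an assumed figure, not a measurement; the point is only that the fixed cost scales with packet count, not with total data.

```python
# Assumed fixed server cost to dequeue and parse one packet, in ms.
PER_PACKET_COST_MS = 0.5

def queue_time_ms(total_bytes, payload_bytes):
    """Fixed queueing cost for sending total_bytes in payload-sized chunks."""
    packets = -(-total_bytes // payload_bytes)  # ceiling division
    return packets * PER_PACKET_COST_MS

# The same 12 KB of query data, chopped small versus packed full:
print(queue_time_ms(12_000, 128))    # 47.0 ms across 94 packets
print(queue_time_ms(12_000, 1_400))  # 4.5 ms across 9 packets
```

Same data, roughly a tenth of the fixed queueing cost, simply because the server has fewer packets to pull off the queue.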