Can you really compare network speed with network bandwidth? Though interrelated, they are two very different things. Network speed measures the rate at which data actually travels from a source system to a destination system, while network bandwidth is the maximum amount of data that can be transferred per second ("the size of the pipe"). Combine the two, and you have what is known as network throughput.

One would assume that high-bandwidth networks would be fast and provide excellent throughput. In reality, this is not always the case, especially once you throw latency into the mix. Latency is the delay packets experience while moving through a network, and it is typically the culprit behind poor application response times and frustrated users. Some latency overhead is unavoidable because of physics, specifically the speed of light. Light travels at about 200,000 kilometers per second through optical fiber, roughly two-thirds of its speed in a vacuum. This means that, theoretically, every 100 km (about 60 miles) a packet must traverse adds half a millisecond (ms) to its one-way latency, or 1 ms to its round-trip time.[i] And just as a train track detours over a causeway, so do fiber routes: you might traverse 12 miles of cable to cover 5 miles of point-to-point distance. As the distance between two points grows, latency accumulates accordingly.
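The propagation arithmetic above is easy to check. A minimal sketch (illustrative only; it assumes the ~200,000 km/s figure cited above and ignores routing detours, queuing and processing delay, which sit on top of this physical floor):

```python
# Estimate propagation delay in optical fiber from route distance.
# Assumes light travels ~200,000 km/s in fiber, as cited in the text.

FIBER_SPEED_KM_PER_S = 200_000  # roughly two-thirds of c in a vacuum

def propagation_delay_ms(route_km: float) -> float:
    """One-way propagation delay in milliseconds over route_km of fiber."""
    return route_km / FIBER_SPEED_KM_PER_S * 1000

one_way = propagation_delay_ms(100)  # the 100 km example from the text
print(f"100 km: {one_way} ms one-way, {2 * one_way} ms round-trip")
```

Running this reproduces the half-millisecond-per-100-km figure; a 12-mile cable path between points 5 miles apart is simply a larger `route_km` than the map suggests.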

Many factors contribute to latency, including the distance between two systems (such as over a wide area network), the number of hops (bridge, router or gateway points) along the way, large packet sizes (video files or encrypted data), jitter (the variance in delay between packets) and network congestion (too many bits in the pipe). Any of these can cause data packets to be dropped and retransmitted, adding still more latency. As more packets are retransmitted over long distances, they consume a growing share of the available bandwidth, degrading network performance.
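One way to put numbers on how latency and packet loss together throttle usable bandwidth is the well-known Mathis approximation for steady-state TCP throughput, roughly MSS / (RTT × √loss). This sketch is not from the original article; it uses assumed, illustrative values for segment size and loss rate:

```python
# Mathis et al. approximation of TCP throughput under packet loss:
#   throughput (bytes/s) <= MSS / (RTT * sqrt(loss_rate))
# A simplification (ignores timeouts, window limits), but it shows why
# the same link delivers far less as round-trip time grows.
import math

def tcp_throughput_mbps(mss_bytes: int, rtt_ms: float, loss_rate: float) -> float:
    """Approximate achievable TCP throughput in megabits per second."""
    rtt_s = rtt_ms / 1000
    bytes_per_s = mss_bytes / (rtt_s * math.sqrt(loss_rate))
    return bytes_per_s * 8 / 1e6  # bytes/s -> Mb/s

# Same (assumed) 1460-byte segments and 0.01% loss, growing distance:
for rtt in (1, 10, 100):
    print(f"RTT {rtt:>3} ms -> ~{tcp_throughput_mbps(1460, rtt, 1e-4):,.0f} Mb/s")
```

Under these assumptions, stretching the round trip from 1 ms to 100 ms cuts the achievable rate by two orders of magnitude, no matter how big the pipe is.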

Emerging 5G networks will help support the trend toward real-time applications at the edge by reducing latency between the device and the nearest antenna or tower. But that is just one segment of a packet's journey to a cloud or data center, so providers are relying on proximity and interconnection to optimize the end-to-end workflow, including data center transmission and end-node processing.

The question that network architects must answer is: “How much latency can your company afford?” The answer is: “It depends on how much it impacts or differentiates your business.” For example:

Amazon saw that for every 100 ms of latency, they lost 1% in sales.

Google found that when an extra 0.5 seconds was added to the time it took to generate a search page, traffic dropped by 20%.

A stockbroker could lose $4 million in revenue per millisecond if its electronic trading platform is 5 ms late in completing a transaction.

Online advertisers have a window of approximately 100 ms to place real-time bids for ads in programmatic advertising systems before they lose out on potential revenue streams.

Unfortunately, when it comes to latency, by the time most companies realize they have a problem, it’s too late. They’ve lost prospective customers before the company’s web page has finished loading.

To protect the value of your investment in network bandwidth, you need to reduce latency. Doing so requires a combination of proximity between your business and its critical counterparties, and direct, secure, private interconnection.

Proximity is nearness in space, time or relationship. So how does proximity factor into the speed/bandwidth equation? Closing the physical distance between two points automatically reduces latency. Lower latency means fewer dropped and retransmitted packets, which frees up available bandwidth and enables faster application response times. Since reducing latency is crucial for digital business, shortening the distance between users, applications, data, clouds, things, partners, customers and other participants results in much greater network optimization and improved application performance.
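The payoff of closing physical distance compounds because one application transaction usually involves many round trips (handshakes, requests, acknowledgements). A rough sketch using the fiber-speed figure from earlier and an assumed, purely illustrative count of 30 round trips per transaction:

```python
# How proximity compounds: each round trip pays the distance penalty,
# so a transaction needing many round trips multiplies the RTT.
FIBER_SPEED_KM_PER_S = 200_000  # fiber propagation speed cited earlier

def rtt_ms(route_km: float) -> float:
    """Round-trip propagation delay in milliseconds over route_km of fiber."""
    return 2 * route_km / FIBER_SPEED_KM_PER_S * 1000

ROUND_TRIPS = 30  # assumed exchanges per transaction (illustrative)
for km in (2000, 100):
    total = ROUND_TRIPS * rtt_ms(km)
    print(f"{km:>4} km route: {rtt_ms(km):.1f} ms RTT, ~{total:.0f} ms per transaction")
```

Under these assumptions, moving the workload from a 2,000 km route to a 100 km one shrinks the propagation portion of the transaction from roughly 600 ms to 30 ms, before any bandwidth is even considered.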

Proximity to digital and business ecosystems in a colocation data center delivers other high-value benefits to your business. Directly and securely interconnecting your business with counterparties within globally distributed IT traffic exchange points at the digital edge accelerates digital transformation. Proximity to increasing amounts of data that can be shared with employees, partners and customers at the edge is also advantageous, as it provides faster and more accurate insights into business and customer requirements for greater optimization and revenue growth.

As companies travel the digital transformation continuum, the demand for high-speed, low-latency, private interconnection grows. The second volume of the Global Interconnection Index (the GXI), published by Equinix, tracks, measures and forecasts the explosive growth in digital business and its direct relation to the growing need for Interconnection Bandwidth[ii]. Installed Interconnection Bandwidth capacity is expected to reach more than 8,200 Tbps by 2021, a five-fold increase over five years with double-digit growth across all industries.

By distributing your IT infrastructure to the edge and leveraging proximity to vital digital and business ecosystems, you can reduce costs and realize a greater ROI from your Interconnection Bandwidth capacity. You can also increase application speed, performance and, ultimately, customer satisfaction. We've seen customers save 70% on average on their networking costs, freeing up money to invest in their hybrid cloud infrastructure and digital transformation.

Learn more about how leading enterprises and service providers are leveraging private interconnection to build digital-ready infrastructures at the edge by reading the GXI Volume 2 report.

[ii] Interconnection Bandwidth is the total capacity provisioned to privately and directly exchange traffic with a diverse set of counterparties and providers at distributed IT exchange points inside carrier-neutral colocation data centers.