Right? So from a class 10 years ago I remember that bandwidth is technically not a speed rating; it's more akin to how much water can fit through a hose than how fast the water is flowing. OK, that made sense and I accepted it.

But the other day I thought of a question I should have had back then: what is the speed? Would it be the transmission speed of electrons through the medium? Or something else I haven't thought of?

Bandwidth is the peak or maximum bit rate; speed is the actual bit rate, which varies during transmission. Both are measured in bits per second, and bandwidth is effectively the maximum speed. The confusion comes from signal processing, where bandwidth means a frequency range. It's a classic example of a term being reused in a different context.

Bandwidth describes how much data can be transferred. Latency describes the time for packets to make a round trip or one-way trip (depends on the packet type and the test) from where you are to any given server.

So, bandwidth is how much, latency is how fast.

Bandwidth is measured in bits or bytes per second (with various SI prefixes); latency is measured in time (milliseconds, typically).
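
To make the "how much vs. how fast" split concrete, here's a rough sketch (the link numbers are made up for illustration): total delivery time is roughly latency plus payload divided by bandwidth.

```python
# A minimal sketch, with made-up numbers: total delivery time is
# roughly the one-way latency plus the time to push the bits through.

def transfer_time(payload_bytes, bandwidth_bps, latency_s):
    """One-way delivery time: setup latency plus serialization time."""
    return latency_s + (payload_bytes * 8) / bandwidth_bps

# A tiny 1 KB request on a 100 Mb/s link with 30 ms latency:
small = transfer_time(1_000, 100e6, 0.030)
# A 1 GB download on the same link:
large = transfer_time(1_000_000_000, 100e6, 0.030)

print(f"1 KB: {small * 1000:.1f} ms")  # ~30.1 ms, almost all latency
print(f"1 GB: {large:.1f} s")          # ~80.0 s, almost all bandwidth
```

For small payloads latency dominates; for big ones bandwidth dominates, which is why both numbers matter.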

This is true for transmission initiation; after that, it's all about how much bandwidth you can actually use (is the transfer speed limited by the server's available bandwidth for one user at that time, or by the user's bandwidth?).

This. Your best parallel would be satellite Internet service. It can have decent bandwidth, up to 3.0 MB/s, but terrible speed. It takes roughly a quarter of a second for a transmission from the ground to reach a geostationary satellite and come back. Once you add in the time it takes for the signal to reach the transmitter, be converted, received, decoded, etc., it ends up giving you a latency averaging around 850 ms. Imagine trying to play an FPS with a latency of 850 ms.
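
A quick back-of-the-envelope check on those satellite numbers, assuming a geostationary satellite at about 35,786 km altitude and ignoring all processing time:

```python
# Back-of-the-envelope satellite latency, assuming a geostationary
# satellite at ~35,786 km altitude (rough figures, not measurements).

C = 299_792_458            # speed of light in vacuum, m/s
GEO_ALTITUDE_M = 35_786_000

# One "hop" is ground -> satellite -> ground. A request/response pair
# makes that trip twice: once up and down for the request, once for
# the reply.
one_hop = 2 * GEO_ALTITUDE_M / C
round_trip = 2 * one_hop

print(f"one hop:    {one_hop * 1000:.0f} ms")     # ~239 ms
print(f"round trip: {round_trip * 1000:.0f} ms")  # ~477 ms
```

Propagation alone is nearly half a second round trip, so once modulation, decoding, and terrestrial routing are added, averages in the 600-900 ms range are plausible.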

While you could have the bandwidth needed, the speed will hinder you. People get this confused because when you download something, you are seeing bandwidth over time. Latency doesn't play much of a factor in large files, since the data stream is continuous; it is in small bursts of data where speed comes into play.

I think this can get confusing because the original definition was:
"(1) A range within a band of frequencies or wavelengths," which made it technically incorrect for digital throughput.

Then they added the definition:
"(2) The amount of data that can be transmitted in a fixed amount of time. For digital devices, the bandwidth is usually expressed in bits per second (bps) or bytes per second. For analog devices, the bandwidth is expressed in cycles per second, or Hertz (Hz)."

So you could say that gigabit Ethernet has 1 Gb/s of bandwidth or 1 Gb/s of throughput and still be correct. However, there are still some older guys out there who don't totally agree with the added definition and may even tell you you're wrong.
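
One thing worth spelling out, since bits and bytes get mixed up constantly in these discussions: 1 Gb/s of bandwidth is only 125 MB/s. A trivial sketch:

```python
# Units check: "gigabit" Ethernet is 1 Gb/s = 10^9 bits per second,
# which is only 125 MB/s before any protocol overhead.

def gbps_to_MBps(gbps):
    """Convert gigabits/second to megabytes/second (SI, 8 bits/byte)."""
    return gbps * 1e9 / 8 / 1e6

print(gbps_to_MBps(1))  # 125.0
```

Real-world file-transfer rates come in below that because of framing, TCP/IP headers, and retransmissions.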

Yeah, I'm assuming you're not saturating your network prior to testing it, but even then the definition still stands. If you're saturating your network and you do a ping, your latency will properly reflect the added latency from the bandwidth in use. Your modem can only send so many packets at once, and a ping will properly describe the response time for any packet at that particular instant.

Regardless of the load, though, latency still describes the response time. The fact that a loaded modem responds less quickly means nothing; latency is still the measurement of the response time of a packet-switched network. How much bandwidth you use might impact it, but it doesn't determine it, considering that packet shaping will introduce latency once you've reached your bandwidth cap, and most ISPs will shape your traffic.

This is all splitting hairs though.

The real thing to take away is that latency is the measurement of "how fast".

In some places this has gotten better. I've seen people with satellite internet with response times closer to 350-400 ms. I think this is improving, but it really depends on how it's set up. The "satellite" might actually be a station on top of a mountain instead of a real satellite, which cuts down on the distance the signal has to travel.

"Speed" is a misnomer in computing, because literally it would be the rate at which electrons/photons travel and are processed. Bandwidth is given in bits or bytes per second (or, in signal processing, in Hz). And just because the bandwidth exists doesn't mean it gets used, thanks to the fragmentary nature of network packets.

To say a certain network is "fast," implying a speed, is to say it has high bandwidth; to say it is "slow" is to say it has low bandwidth. In both cases this is relative to the network load. For a single packet, a 56K network could seem as "fast" as a gigabit network, because the packets arrive at about the same time. But if you're sending a billion packets, 56K will quickly be called "slow," because it doesn't have anywhere near the bandwidth of the gigabit network.

Minus all the switches and hubs it hit on the way, and minus the fact that communications are rarely in a straight line (I figure that's why you did 100 instead of 50). Basically, that tells you the average speed over the path the packets took. Needless to say, it doesn't take long for a packet to circumnavigate the Earth, because it travels mostly on ridiculously high-bandwidth fiber-optic cables. Sending a message to Pluto, on the other hand, would take hours each way, no matter how it's relayed.

Actually, I did 100 instead of 50 because pings are a round-trip measurement. For the calculation I ignored the fact that the path traveled isn't straight; given the way cabling is done, the actual distance traveled by each packet could easily have doubled.
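
That round-trip arithmetic can be sketched like this (the distance and ping figures are illustrative, not measurements):

```python
# A ping covers the path twice, so the one-way figure is half the RTT.
# Numbers here are illustrative, not measured.

C_KM_PER_MS = 299.792  # light covers ~300 km per millisecond in vacuum

def effective_speed_fraction(straight_line_km, rtt_ms):
    """Fraction of light speed the packets 'averaged' over the
    straight-line distance. Real cable paths are longer, so this
    understates the true signal speed."""
    one_way_ms = rtt_ms / 2
    return (straight_line_km / one_way_ms) / C_KM_PER_MS

# e.g. a server 3000 km away (straight line) with a 100 ms ping:
print(f"{effective_speed_fraction(3000, 100):.0%} of c")  # 20% of c
```

If the cable path were actually double the straight-line distance, the real signal speed would be twice that fraction.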

To some extent that is true; it certainly is with all modern forms of physical communication. 56k was slow to respond even with one packet, but that was just the nature of the phone system. A ping over 56k with a single packet could still take 250-300 ms, where on cable, DSL, or fiber it could be closer to 80-100 ms or lower, depending on the location of the server and the quality of the connection.

All DSL and cable have done is increase the number of signals through better and more complex modulation than we could cost-effectively perform years ago: moving up to 64-, 128-, and 256-point QAM, and using trellis coding to check the data with minimal processing overhead, which keeps latency down.

This is also why some media types are slower: they cannot be compressed, or the compression applied to them results in unacceptable artifacts.

Plus, advances in termination and load calibration and much else have been huge for the cleanliness of the signal, which costs fewer parity-check bits and allows larger QAM "words".
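
For what it's worth, the size of a QAM "word" is just log2 of the constellation size (the constellation sizes below are the ones mentioned above):

```python
# Bits carried per QAM symbol = log2 of the number of constellation
# points. Constellation sizes taken from the discussion above.
import math

def bits_per_symbol(constellation_points):
    return int(math.log2(constellation_points))

for points in (64, 128, 256):
    print(f"{points}-QAM: {bits_per_symbol(points)} bits/symbol")
# 64-QAM: 6, 128-QAM: 7, 256-QAM: 8
```

So moving from 64-QAM to 256-QAM raises raw throughput by a third on the same symbol rate, provided the signal is clean enough to distinguish the denser constellation.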

OP

You are looking for two separate things. First is your peak theoretical bandwidth: I purchase a slice of the pie that can reach a theoretical peak of, let's say, 100 MBps. However, every residential connection in the US is oversold. ISPs know users will rarely use all the bandwidth they buy all the time, so if we follow a 70% "bend of the knee" rule, we can oversell a 1 Gb connection by 30% with no real noticeable degradation of service; most seem to follow a 50%-or-lower rule, though.
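
The oversell arithmetic described above can be sketched like this (the 30% margin is the figure from the post; real provisioning models are more involved):

```python
# Hypothetical oversubscription math for the "bend of the knee" rule
# described above. The oversell margin is an assumed figure.

def oversold_capacity(link_gbps, oversell_fraction):
    """Aggregate customer bandwidth sold on one link: the link's
    capacity plus an oversell margin."""
    return link_gbps * (1 + oversell_fraction)

# A 1 Gb uplink oversold by 30%:
print(oversold_capacity(1, 0.30))  # 1.3
```

The bet is that customers' peaks don't all line up; when they do (evening streaming hours, say), everyone's actual speed drops below the advertised rate.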

So let's say you are in a perfect world and your connection is not being throttled or oversold by your ISP. We then find the next weakest link in the chain, usually your router, or modem/router combo. Since vendors are trying to deliver good service as cheaply as possible, the amount of processing the modem can do is generally limited, and once a connection limit is reached (not a physical limit, but port/IP/service connection aggregation), delay may be added as each packet waits for the CPU to route it to the originating client/server.

The hardware limitation of bandwidth in routers is known as backplane bandwidth.

For example, an 8-port Gb switch really only needs 4 Gb of backplane bandwidth to service all 8 ports at full speed, but depending on MTU this may require more storage and processing than is cost-effective in a $29 piece of hardware, so you may only get 2 Gb or 1 Gb, meaning that when two other computers are transferring data at a high rate, your internet may slow down in both latency and throughput.
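
The backplane arithmetic above can be sketched as follows (this counts each port-to-port conversation once, the way the post does, rather than the full-duplex figure a vendor datasheet might quote):

```python
# With 8 ports, at most 4 port-to-port conversations can run at once,
# so by this accounting 4 x 1 Gb of backplane covers wire speed on
# every port. (Vendor specs often quote double, counting both
# directions of full duplex.)

def min_backplane_gb(ports, port_speed_gbps):
    """Backplane needed so every port pair can stream at full rate,
    counting each conversation once."""
    return (ports // 2) * port_speed_gbps

print(min_backplane_gb(8, 1))  # 4
```

A switch with less than this is oversubscribed internally, which is where the slowdown under concurrent transfers comes from.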

I reviewed a set of power-line Ethernet adapters; they added 3 ms to my latency and were only capable of 30 Mbps, despite being rated much higher.

If you want to know your absolute "ping" or turnaround times, start with your internal network, then move out to the local ISP subnet, then usually to their primary node, and then to a web server. Average a few runs at each step, and subtract the numbers from your network up through the ISP's last node to determine where (and whether) a problem is.
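
One way to do that hop-by-hop comparison is a small script like the sketch below. The hostnames are placeholders, and `ping -c` is the Linux/macOS form (Windows uses `-n`); the parsing demo at the end runs on canned output, no network needed.

```python
# Sketch: ping each point along the path a few times, average, and
# compare the deltas. Hostnames below are placeholders.
import re
import statistics
import subprocess

def parse_ping_times(ping_output):
    """Pull the time=... values (in ms) out of ping's text output."""
    return [float(m) for m in re.findall(r"time=([\d.]+)", ping_output)]

def avg_ping_ms(host, count=5):
    """Average RTT to `host` via the system ping (-c is Linux/macOS)."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    times = parse_ping_times(out)
    return statistics.mean(times) if times else None

# Parser demo on canned output:
sample = "64 bytes: time=12.3 ms\n64 bytes: time=11.7 ms\n"
print(parse_ping_times(sample))                    # [12.3, 11.7]
print(statistics.mean(parse_ping_times(sample)))   # 12.0
```

Running `avg_ping_ms` against your router, your ISP's first hop, and a distant server, then subtracting, isolates roughly where the delay lives.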