mattgreen: The minimum handover for UFB at a POI is 1Gbit though, right? One would assume Snap would have buckets of bandwidth relative to customers in the catchment areas, so it shouldn't be a problem.

You need to think of it in terms of the medium being available or not available for transmit/receive in each split second.

which at 30 megabit is approximately 2500 packets/second for 1500 byte packets:

>>> (30 * 1000 * 1000) / 8 / 1500
2500

and roughly 3564 kilobytes/second of TCP payload:

>>> (30 * 1000 * 1000) / 8 / 1500 * 1460 / 1024
3564

So a transfer speed of up to around 3.5 megabytes/sec.

But how many packets one can send in 1 millisecond is anyone's guess. 2.5? 5?

Normal Linux now sends 10 packets in the initial burst, so on an uncongested gigabit link that burst takes 0.12 msec to send. But if you can only send 2.5 packets per millisecond at 30 megabit, then those 10 packets need to be spread over 4 msec.
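The burst arithmetic above can be sketched as follows (the 1500-byte packet size and rates are taken from the discussion; this is a back-of-envelope check, not a model of real shaper behaviour):

```python
# Back-of-envelope: time to serialize a 10-packet initial window
# at gigabit line rate versus at the shaped 30 Mbit rate.
PKT_BITS = 1500 * 8  # bits per full-size packet

def burst_time_ms(n_packets: int, link_bps: float) -> float:
    """Milliseconds to put n_packets on the wire at link_bps."""
    return n_packets * PKT_BITS / link_bps * 1000

print(burst_time_ms(10, 1_000_000_000))  # ~0.12 msec on an uncongested gigabit link
print(burst_time_ms(10, 30_000_000))     # ~4.0 msec when shaped to 30 Mbit
```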

Then, to disperse packets evenly, you need to be cycling through the queues looking for new packets to send 1000 times/sec or more, for each connection, which has some overhead. Really, to guard against jitter it has to be even more often than that, and you need to keep checking whether there's new high-priority stuff in the queue, like VoIP, and send that ahead of the bulk traffic. Although I'm assuming Chorus takes care of that if you tag it onto the CIR queue, in which case you're more OK buffering for 10 to 20 msec.
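A toy version of that pacing idea, assuming the 30 Mbit shaped rate from earlier (the queue names and packet labels are made up for illustration):

```python
from collections import deque

RATE_BPS = 30_000_000   # assumed shaped rate from the discussion
PKT_BITS = 1500 * 8     # bits per full-size packet

def drain(bulk: deque, priority: deque, budget_s: float) -> list:
    """Toy pacer: within a time budget, send as many packets as the shaped
    rate allows, always serving the priority (e.g. VoIP/CIR) queue first."""
    n = int(budget_s * RATE_BPS // PKT_BITS)  # packets that fit in the budget
    sent = []
    while n and (bulk or priority):
        q = priority if priority else bulk
        sent.append(q.popleft())
        n -= 1
    return sent

# In a 4 msec budget there is room for 10 full-size packets at 30 Mbit,
# and the tagged VoIP packet goes out ahead of the bulk traffic.
out = drain(deque(f"bulk{i}" for i in range(12)), deque(["voip0"]), 0.004)
```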

mattgreen: I'm no expert on metro/carrier ethernet but won't Snap see a 30Mb or 100Mb "virtual port"? Therefore they won't be able to pump frames onto the "wire" to Chorus faster than that rate.

I think you'll find interpacket delay to be comparable to that over 100 megabit for users that use gigabit at all points, and to 100 megabit for users that have 100 megabit at one end. A lot of users are using routers with 100 megabit ports, which is one reason why I believe 100 megabit may work better.

If you dump the traffic and look at the arrival times of back-to-back packets, you can deduce whether or not this behaviour is coming through to the UFB connection. But Snap may implement a workaround soon.
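One rough way to do that analysis, sketched below: feed it the timestamps from a packet dump (e.g. the first column of `tcpdump -tt` output) and look at where the gaps cluster. The thresholds are just the serialization times of a 1500-byte packet at each rate:

```python
def interpacket_gaps(timestamps):
    """Microsecond gaps between consecutive packet timestamps (in seconds,
    e.g. parsed from the first column of `tcpdump -tt` output)."""
    return [(b - a) * 1e6 for a, b in zip(timestamps, timestamps[1:])]

# Gaps clustered near the serialization time of a 1500-byte packet reveal
# the bottleneck rate: ~12 us suggests gigabit, ~120 us suggests
# 100 megabit, ~400 us suggests a 30 Mbit shaper doing the pacing.
gaps = interpacket_gaps([0.000000, 0.000400, 0.000800])
```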

Just to add to the conversation for others who may not be aware of exactly how things work: low-priority EIR data (i.e. your 30Mbps headline speed) is queued when it exceeds your connection speed, while the high-priority CIR data packets are discarded when they exceed the dimensioned speed.
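In other words, EIR gets a shaper and CIR gets a policer. A toy contrast of the two behaviours (queue depths and packet counts here are made up for illustration, not Chorus's actual dimensioning):

```python
from collections import deque

def shape_eir(packets, capacity):
    """Shaper: packets beyond capacity wait in a queue instead of dropping."""
    return packets[:capacity], deque(packets[capacity:])

def police_cir(packets, capacity):
    """Policer: packets beyond capacity are discarded outright."""
    return packets[:capacity], packets[capacity:]

sent, queued = shape_eir(list(range(8)), 5)     # 3 packets held back for later
sent2, dropped = police_cir(list(range(8)), 5)  # 3 packets lost
```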

Oh, so they are queued? For how much data / how many packets / what duration? If that's the case, then why are international speeds screwed? I've noticed in the past that EUBA connections often don't buffer enough data at high connection speeds, but buffer too much at low connection speeds, i.e. it appears not to be dynamic.

Can anybody confirm the colour of the LAN1 light on their ONT box? Mine has always been orange for some reason. I spoke to the Chorus guys, who said that's fine, however I have just been reading this article:

http://www.nbr.co.nz/opinion/mind-blown-hands-ufb-fibre

And I noticed the LAN1 light is green.

Here is the picture from the article, and below that is my ONT box, which has the orange LAN1 light. I have been installed for 2 weeks now and it's only ever been orange.

LAN1 on ONT is orange if there is a gigabit connection to your router, otherwise it's green. Probably a moot point for now given there aren't any residential UFB services over 100MBit.

I am on 100/50 UFB with Orcon, but not using Orcon Genius (using a MikroTik RB2011). I will post some iperf tests later when I'm home, but I've found there is virtually no restriction locally: I can get about 94MBit down/44MBit up using iperf from home to a server I have hosted in Auckland.
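Those numbers are about as good as it gets: with full-size packets, TCP goodput on a 100 megabit port tops out near 95 Mbit once headers and framing overhead are counted. A quick sanity check using standard Ethernet/TCP figures (assuming no other losses):

```python
MSS = 1460                         # TCP payload bytes per full-size packet
WIRE = MSS + 40 + 14 + 4 + 8 + 12  # + TCP/IP headers, Ethernet header,
                                   #   FCS, preamble, inter-frame gap

def goodput_mbps(line_rate_mbps: float) -> float:
    """Best-case TCP payload rate for a given line rate."""
    return line_rate_mbps * MSS / WIRE

print(round(goodput_mbps(100), 1))  # 94.9
```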

If you do indeed have a gigabit router, I'd be curious if you could compare speeds between 100 megabit and gigabit. Your router may have an option to set the link speed; if not, the same thing should be testable by setting your computer's NIC to 100 megabit.