There is another project out there in the ether that I have a hand in providing input for. One of the features I felt it needed is exporting NetFlow information, for traffic the Linux machine handles, to a collector. This is dual-stack traffic, but I have the collector listening on IPv6.

Firstly, I needed something that would gather and export the data, so I found softflowd. My Ubuntu server had it in the repo, so a quick apt install got it onto the machine easily enough. You need to edit /etc/default/softflowd and set which interface(s) you want it capturing and generating flow data from, and what options to feed the daemon, like which server:port to export that data to:

INTERFACE="eth#"
OPTIONS="-v 9 -n [x:x:x:x::x]:9995"

Fill in the correct interface name you want to gather data from. The -v 9 option tells it to use NetFlow v9, which has IPv6 support. The -n option specifies the collector machine’s IP and port, so fill in the correct IPv6 address of that collector; the bracketed form shown above is the format for specifying an IPv6 host running a collector, like nfcapd. Then you can fire up the softflowd daemon, and you should start getting data sent to the collector:
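With the config in place, something along these lines starts the daemon and lets you verify it is actually tracking flows (service name assumed from the Ubuntu packaging):

```shell
# Restart softflowd so it picks up /etc/default/softflowd
sudo service softflowd restart

# softflowctl talks to the running daemon over its control socket
sudo softflowctl statistics   # flow counts and export stats
sudo softflowctl dump-flows   # per-flow detail, if you need it
```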

Not that I go out of my way to endorse one project/product over another, but there is one I have recently fallen in love with for streaming my media, especially since it can use IPv6! I needed a cross-platform solution for my streaming media needs. I was originally using XBMC, but only had it tied into the TV, and I use several other computers and devices in other locations outside of the house. So I read up on Plex. I got it installed with little to no effort and could readily access my content wherever I was. I even tested this on my last trip to London, UK, and was able to get a decent 1.2 Mbit/s stream from my house. The only issue was that it wasn’t using IPv6, either in the app or when accessing via plex.tv (the server on that site only comes up with an IPv4 address).

So poking around I discovered 2 things: 1) I could access the Plex server directly at the IP/hostname of the server, and 2) there was a checkbox to enable IPv6!!

Simply browse to your Plex server, click on the settings icon (screwdriver + wrench), select Server, click on Networking and then “Show Advanced”. You’ll see the checkbox at the top; click it and save settings. Now you should be good to go! You can access the server directly using either the literal IPv6 address inside brackets, [address]:32400/web/, or hostname:32400/web/.

I still need to check if apps on Android are accessing over IPv6 or not, but for now I know a browser directly connecting to the Plex server does. A quick netstat shows us it is listening:
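For reference, that check looks something like this (32400 is Plex’s default port; adjust if yours differs):

```shell
# Look for the Plex listener bound to the IPv6 wildcard
netstat -tln | grep 32400

# or, with the newer ss tool:
ss -6 -tln 'sport = :32400'
```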

As part of a request at work to figure out the IPv4 addresses of devices on a network where broadcast pings don’t work, and with no administrative access to the switches/routers, I took a look at solving this with IPv6. We know that you can ping6 the all-nodes multicast address and get DUP! replies from IPv6-enabled hosts on that LAN segment. These will typically be link-local addresses, from which you can determine a MAC address. As for resolving that MAC address back to an IPv4 address from a client host (not the router/switch), I was thinking Reverse ARP or something, but support for that wasn’t present in the Ubuntu 13.10 kernel on the main machine I was working with. I started looking around for other options using IPv6 and found RFC 4620, Section 6.4.
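The MAC-recovery step is mechanical for EUI-64 derived link-local addresses; a minimal sketch in Python (the example address is made up):

```python
import ipaddress

def mac_from_eui64(addr: str) -> str:
    """Recover the MAC address embedded in an EUI-64 based IPv6 address."""
    # Take the low 64 bits (the interface identifier)
    iid = int(ipaddress.IPv6Address(addr)) & ((1 << 64) - 1)
    b = iid.to_bytes(8, "big")
    if b[3:5] != b"\xff\xfe":
        raise ValueError("interface ID is not EUI-64 derived")
    # Undo the universal/local bit flip and drop the ff:fe filler
    mac = bytes([b[0] ^ 0x02]) + b[1:3] + b[5:8]
    return ":".join(f"{octet:02x}" for octet in mac)

print(mac_from_eui64("fe80::211:22ff:fe33:4455"))  # 00:11:22:33:44:55
```

Note this only works for SLAAC/EUI-64 addresses; hosts using privacy extensions or stable-privacy addresses won’t embed their MAC this way.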

The gist of it is that you send an ICMPv6 Type 139 (Node Information Query) packet to an IPv6 address, asking if it has any IPv4 addresses configured, either on the interface the target address is on or on any interface on the machine. Because it discloses that kind of information, this is disabled by default on hosts, and *IF* you insist on filtering ICMPv6 types, definitely make certain this is one of them. It works if you can enable it. I found a Git repo of ninfod, a userspace daemon that needs to run as root for raw socket access. I compiled it, ran it with the flag to allow replies to globally reachable hosts (I was testing between 2 remote sites, as my older VMs and their ping6 command didn’t support the -N flag), and gave it a test:
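Roughly, the two-step flow looks like this (the interface name and target address are placeholders, and -N subject types vary between iputils versions):

```shell
# 1. Elicit replies from every IPv6 node on the segment
ping6 -c 2 ff02::1%eth0

# 2. Send an RFC 4620 Node Information query asking a discovered node
#    for its IPv4 addresses (the target must be running ninfod or similar)
ping6 -N ipv4 fe80::211:22ff:fe33:4455%eth0
```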

So while this works as intended, it makes absolute sense why it needs to be disabled by default, and even potentially filtered out completely. But at least it was fun to play with IPv6 to determine IPv4 information of remote hosts.

There is a lot of news surrounding Net Neutrality, and potential repercussions of decisions made by courts, and some players out there that want to grab as much cash as they can, and claim it is in the best interest of their customers.

Netflix is just an example people love citing because it is bandwidth intensive, yet it is not the entire story itself. Take a moment and understand how the Internet is pieced together. The Internet is a mass of interconnections between networks. These interconnections happen in basically one of three ways:

transit: network A pays network B to reach every other network that isn’t A or B. Good networks usually get multiple transits for failover and/or alternate paths to those other networks. You can buy multiple ports for bonding to increase capacity, etc. The average transit price without a Service Level Agreement (SLA — guaranteed connectivity, or you can yell at us a lot and we credit you) is around $1-2/mbit, and with an SLA it can hit upwards of $10/mbit. These are average prices when buying 10G of capacity at a time right now.

peering (settlement free, or “free”): Network A spends a bunch of money to get into popular (and less popular but regionally situated) Internet Exchange Points (IXPs), and advertises its customers to other participants at no charge, usually over a series of network switches operated by that IXP. These networks pay for a number of switch ports and those ports’ capacity. This can also happen as a “private interconnect” where, at this mutual IXP, the networks pay the monthly cost of running a fiber between them and exchange their customers’ routes that way. This means if you have 1G on the public exchange but want 10G to a specific network, you pay for the cheaper of upgrading your exchange port to handle more capacity or the private interconnect. The private interconnect is generally cheaper, as a single 10G exchange port can run around $7k/mo, versus $350/mo for a fiber connection capable of 1/10/40/100G capacities (depending on either side’s hardware capabilities and the negotiated speed). You can also bond multiple ports to have loads more capacity, and still be cheaper than the cost of that single flat-rate port on the exchange.

paid peering: Network A pays network B to only reach B’s customers/routes. Comcast, AT&T, etc., these are the players that usually play this game. “Oh you want to reach our users over a dedicated line, pay us $x as well as the cost of the connection.”
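To put the ballpark figures above side by side (these are the post’s rough examples, not real quotes):

```python
# Back-of-envelope monthly costs from the rough figures cited above
transit_10g_no_sla = 10_000 * 1.50   # 10G of transit at ~$1.50/mbit
exchange_10g_port = 7_000            # flat-rate public exchange 10G port
private_xconnect = 350               # fiber cross-connect, speed negotiable

print(transit_10g_no_sla)                      # monthly transit bill
print(exchange_10g_port - private_xconnect)    # savings of a private interconnect
```

The gap between the last two numbers is why the private interconnect usually wins once you need serious capacity to one specific network.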

Netflix, since it appears to be everyone’s favorite example, has paid for at least 2 of the 3 (almost no one in the networking world divulges whether they have done the “paid peering” option, but it can be assumed some content companies do). So let’s look at Netflix’s interconnection map. The thick lines are where reaching Netflix is most common, and assumed to be their paid transits. Lots of people traverse Level3 to reach Netflix, so that reliance would point to a paid transit. Thinner lines are the other ISPs that do not have as many networks behind them as Level3 does, so users don’t traverse them as much. We could assume those are peering connections at IXPs. Netflix has also deployed caching hardware at locations close to the Comcasts and AT&Ts that talk with players like Level3, to reduce latency and give you a better viewing experience.

So in order for Netflix to provide you with their service, they have now bought: transit, peering capacity, deployed de-centralized hardware to bring their content closer to you, and quite possibly already pays the Comcasts, etc. the paid peering costs. They already pay to put their content on the Internet, the whole Internet, and nothing but the Internet (with the usual regional restrictions to enforce copyright blah blah). You pay $60/mo (or more, or less, whatever) to reach that WHOLE INTERNET, and an additional $7 or whatever for the Netflix service.

But wait: now if Netflix doesn’t cough up more money, not only to your provider but to any provider that decides to go this route, the quality of their service could be degraded ARTIFICIALLY. Alllll that capacity is still there and never went away; it’s just that the place you pay $60/mo to decided they want more money, for services/destinations you already pay them to deliver to you. So Netflix eats the cost of doing business, and maybe you’re now looking at $11/mo, because Netflix is out for profit too, without a doubt.

Now the kicker. Take away the name Netflix, and replace it with some other service you like. Actually, make it your project/next-great-idea. Hope you have the cash to pay to play.

That isn’t what the Internet should be, and for other parts of the world it won’t be. But with the US being a decent driving force behind changes that make their way throughout the Internet, it very well could be. Setting bad precedents here in the US makes it so others might follow suit elsewhere.

Swapped from AT&T to T-Mobile in order to take advantage of their 4G/LTE IPv6 network. Since I dogfood IPv6 every chance I get, and the cost to swap saved me a whopping $0.50, I moved forward with it. I find that if I set their EPC.TMOBILE.COM APN to IPv4/IPv6, I don’t really see much in the way of dual-stack actually working on the phone. So I set it from the default IPv4 to just IPv6, and that got it working with native IPv6 and using CLAT+NAT64/DNS64 for IPv4 sites. Screenshot from my Galaxy Nexus running 4.3:
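Under the hood, NAT64/DNS64 works by embedding the IPv4 address into an IPv6 prefix. A sketch using the RFC 6052 well-known prefix (T-Mobile’s actual prefix may differ):

```python
import ipaddress

def synthesize_nat64(v4: str, prefix: str = "64:ff9b::") -> str:
    """Embed an IPv4 address in the low 32 bits of a /96 NAT64 prefix."""
    v6 = int(ipaddress.IPv6Address(prefix)) | int(ipaddress.IPv4Address(v4))
    return str(ipaddress.IPv6Address(v6))

print(synthesize_nat64("192.0.2.1"))  # 64:ff9b::c000:201
```

This is what DNS64 does for IPv4-only destinations, and the CLAT on the phone does the equivalent translation for apps that insist on literal IPv4 sockets.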

So I got to do some honest IPv6 related work at the job the last 2 weeks. One task was to verify we had IPv6 working on the load balancers to hosts behind it. I was a bit wary of the state of IPv6 security on these A10 LBs, so I opted to keep the globally routed IPv6 space on the LB’s uplink interface, and the VIPs. And behind the scenes, use ULA.

Step 1: I generated a /48 of ULA for the location, and assigned a /64 for use on the VLAN that the inside interface of the LB sits on with the servers themselves.
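For reference, the ULA Global ID algorithm from RFC 4193 can be sketched like this (the MAC and timestamp inputs are arbitrary examples; any online ULA generator does the same thing):

```python
import hashlib
import ipaddress

def ula_prefix(mac: str, timestamp: int) -> ipaddress.IPv6Network:
    """RFC 4193: SHA-1 over a 64-bit time value and the machine's EUI-64,
    keeping the low 40 bits of the digest as the Global ID under fd00::/8."""
    eui = bytes.fromhex(mac.replace(":", ""))
    # Build EUI-64: flip the universal/local bit, insert ff:fe in the middle
    eui64 = bytes([eui[0] ^ 0x02]) + eui[1:3] + b"\xff\xfe" + eui[3:]
    digest = hashlib.sha1(timestamp.to_bytes(8, "big") + eui64).digest()
    prefix_bytes = b"\xfd" + digest[-5:] + b"\x00" * 10
    return ipaddress.IPv6Network((prefix_bytes, 48))

net = ula_prefix("00:11:22:33:44:55", 1_300_000_000)
print(net)  # a pseudo-random /48 somewhere inside fd00::/8
```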

Step 2: I configured ::1/64 on the LB inside VLAN interface and ::2/64 on a server, and verified that they could reach each other.

Step 3: I installed lighttpd on the server and configured it to listen on the ULA address.

Step 4: From my ARIN allocation, I have a /64 reserved for configuring /126s on device links to the router, so I configured it on the LB’s dedicated interface on the router. Using ::1/126 on the router; ::2/126 on the LB’s interface; ::3/126 as the VIP.

Step 5: Create on the LB an “IPv6 NAT Pool”, which is really a set of IPs that will act as the source IP when the LB talks to the webserver. I used ::3 through ::ff of the same /64 of ULA space. The A10 LB only allows a pool of up to 1000 IPs, so keep that in mind.

Step 6: Next you create the “Server” entry which is a description referencing an IPv6 address, in this case the ULA address of the web server. You also specify what IP services it will host that the LB can healthcheck, so I only set TCP 80.

Step 7: Then a “Service Group” needs to be created, and this is where you set what kind of LB algorithm and which servers will be used.

Step 8: Now a “Virtual Service” is defined that will tie in what service is forwarded to servers behind the LB, in this case HTTP on port 80, as well as what “NAT Pool” to use.

Step 9: Finally we create the “Virtual Server” (or VIP) with what globally routed IPv6 address you want to use, and what host/service will be used internally.

Now the above is just for getting IPv6 working through the unit. You can obviously attain dual-stack status by doing the same using IPv4, as well as actual load balancing by creating multiple “Server” entries and adding them to the “Service Group”.

Noticed this weekend that I couldn’t respond to emails on my personally hosted domains. At first I thought they had changed my PD prefix, but it was up to date in Postfix. I tried the submission port and it worked just fine. So it looks like Comcast finally caught up with “feature parity” in disallowing outbound SMTP connections on TCP 25.
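If your own server is the one being blocked on TCP 25, one common workaround is relaying outbound mail through a smarthost on the submission port instead; a minimal Postfix sketch (hostname and map path are placeholders):

```
# /etc/postfix/main.cf additions
relayhost = [smarthost.example.net]:587
smtp_tls_security_level = encrypt
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
```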

This assumes you already have nfsen/nfdump configured, and are looking to just make a flow profile to graph IPv6 traffic from your routers. If you are looking to get nfsen initially configured, definitely follow the instructions on their site.

Say you have an sFlow-capable router like… picking one totally not at random… a Brocade XMR or MLX(e), and you want some basic flow data, especially IPv6. How many routers you are going to collect flow data from will determine how beefy a machine you will need. I know that at $lastjob, it was a hefty CPU (and definitely more than 1), tons of RAM, and hardware RAID. Right now, I’m using dual quad-core Xeons, tons of RAM, and a small hardware RAID, but this machine serves many purposes. I’m also only polling 4 MLX routers.

Go ahead and access your nfsen website, and on the Profiles pulldown, select “New Profile …”. In the creation dialog, give the profile whatever title you like; I went with the generic title of “IPv6”. If you want to add it to a group or make one for it, do as you please; I left that alone so I’d have it as a readily available profile to view. Select all the sources, or at least the ones you KNOW have IPv6 configured and possibly carrying traffic (hosts, peers, transits, etc.). In the filter option, simply put: inet6

Save the profile, wait a while for data to get newly collected, and your graphs should start populating with delicious IPv6 flow information.
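Outside the web UI, the same filter works ad hoc with nfdump (the profile data path is an assumption based on a default nfsen layout):

```shell
# Top 10 IPv6 talkers by bytes from the collected profile data
nfdump -R /var/nfsen/profiles-data/IPv6/ 'inet6' -s srcip/bytes -n 10
```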