D/Line Radio - https://www.dlineradio.co.uk
Wearing the many technical hats of radio.

Single vs Multi Channel Architectures for WiFi
Mon, 14 Aug 2017 - https://www.dlineradio.co.uk/articles/single-vs-multi-channel-architectures-for-wifi/

Almost every area of IT has its holy wars. Whether it's processors with AMD vs Intel vs ARM, GPUs with AMD vs Nvidia, or even operating systems with Linux, Windows and even BSD in competition. In my own experience, you're best off using the correct tool for the job. Sometimes one OS, GPU or processor is a better fit for the job than another.

When it comes to wireless network architectures, there are two RF models out there: single channel and multi-channel. This article is where I don the asbestos suit and enter the arena, trying to be as objective as possible.

The multi-channel architecture is the one you're most likely familiar with. The area you need to cover is broken into a number of different cells, one per AP. The number and transmission power of the cells depend on the coverage and density you're planning for. Generally, for a higher density design, you'll see more APs, within reason.

Each cell is on a different channel to its neighbours. The main reason for doing this is that every client wishing to use the wireless network in a given area and channel needs to share airtime - only one can transmit at a time. By spreading clients among channels, the chance of collisions reduces. The more spectrum you can do this with, the lower your contention ratio should be.

With only three non-overlapping channels to play with at 2.4 GHz, it's not long before you're re-using channels. Hopefully without overlap but, even so, you will encounter co-channel interference. That has the effect of raising the noise floor, dropping the signal to noise ratio, dropping throughput and increasing both transmission time and collisions.

In the UK, you can use channel 13 as well, technically allowing for four non-overlapping 20MHz channels. The catch in doing so is that you’ll encounter complaints from international users very quickly. Turns out that telling people from the USA and China to re-configure their MacBook for use in the UK doesn’t go down so well…

Image from Wikipedia.

While things are better on 5GHz, there are a few gotchas regarding the extra spectrum you have available. What’s called UNII-3 spectrum in the standards is actually licensed in the UK. That effectively writes off a number of usable channels.

At the bottom end of 5GHz, the first four 20MHz channels can be used with similar caveats to the 2.4GHz spectrum. Anything after that (the largest chunk of freely available spectrum) requires the use of DFS. Not the furniture store but Dynamic Frequency Selection, which acts as a RADAR avoidance feature.

While there is a possibility you may encounter the odd rogue DFS event that will cause an AP to change channel or at least go quiet, I’d recommend the use of DFS channels anywhere you’re not constantly being painted by RADAR.

On that note, let’s go back to the original idea of this multi-channel arrangement – clients will be spread across the different channels as they move physically through the area in question. This means clients will need to roam between APs, which can be sped up a bit through features such as OKC.

In a high density environment, you can operate tight, low powered cells to spread the load among APs. Remember, it’s a trade-off between co-channel interference driving up the noise floor and number of clients on a channel burning airtime.

The more clients on-channel, the less airtime each client can have. This is impacted even more when low data rates are in play, as it takes longer for the client in question to transmit the same data. That gets even worse when you consider that beacons are transmitted at the lowest basic rate, burning even more airtime. If you ever needed a reason to kill off those old data rates, admittedly at the cost of some perceived coverage, there it is.
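To put rough numbers on that, here's a quick Python sketch. The beacon size, preamble times and SSID count below are illustrative assumptions rather than measurements from a real AP, but they show the scale of the difference between a 1 Mbps and a 6 Mbps lowest basic rate.

```python
# Back-of-envelope beacon airtime cost. All figures below are
# illustrative assumptions, not measurements from a real AP.

BEACON_BYTES = 300          # a typical-ish beacon with a few IEs
BEACON_INTERVAL_MS = 102.4  # the usual default beacon interval

def beacon_airtime_us(rate_mbps, preamble_us):
    """Microseconds one beacon occupies the air at a given rate."""
    return preamble_us + (BEACON_BYTES * 8) / rate_mbps

def beacon_duty_cycle(rate_mbps, preamble_us, ssids=1):
    """Fraction of all airtime spent on beacons for one AP."""
    per_beacon = beacon_airtime_us(rate_mbps, preamble_us)
    return (per_beacon * ssids) / (BEACON_INTERVAL_MS * 1000)

# 1 Mbps DSSS (192 us long preamble) vs 6 Mbps OFDM (20 us preamble),
# with four SSIDs broadcast from the same radio.
slow = beacon_duty_cycle(1, 192, ssids=4)
fast = beacon_duty_cycle(6, 20, ssids=4)
print(f"1 Mbps basic rate: {slow:.1%} of airtime, 6 Mbps: {fast:.1%}")
```

Roughly a tenth of all airtime gone to beacons alone in the slow case, before a single byte of user traffic moves.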

(On a side note, perceived coverage is a problem we see in FM broadcast. Cranking the audio processing can increase the perceived coverage area, at the cost of audio quality or even stereo service in some instances).

Taking that into account, you can see why more, smaller cells providing more spectrum to clients can be a good thing. It’s only possible to take it so far though, as you still have the co-channel interference problem to worry about.

Compare this approach to a single channel architecture. Here every AP is transmitting on the same channel, with the same BSSID. The client has no knowledge that they are roaming between APs as they move around the area.

The cleverness in this comes in the scheduling of clients on the channel. When it comes to getting a chance to transmit, there are some gaps between frames that can't be used, which exist to protect the previously transmitted frame. Once these are out of the way, the contention window comes into play. Each client chooses a random backoff within this window as the point at which to start transmitting. If another radio jumps in ahead of you, it gets the slot to transmit its frame. For every slot that passes unused, the radio counts its backoff down towards zero.
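As a toy illustration of that contention process, here's a Python sketch. It models a single round of random backoff draws rather than the full 802.11 DCF state machine (no counter carry-over between rounds, no window doubling after collisions), which is enough to show why collisions climb with client count and fall as the window widens.

```python
import random

def contend(n_clients, cw, rng):
    """One round of contention: each client draws a random backoff
    slot in [0, cw]. The lowest draw transmits first; a tie means two
    radios start at once - a collision. Toy model only: real 802.11
    carries counters across rounds and doubles cw on collision."""
    draws = [rng.randint(0, cw) for _ in range(n_clients)]
    return "collision" if draws.count(min(draws)) > 1 else "success"

def collision_rate(n_clients, cw=15, rounds=20_000):
    rng = random.Random(1)  # fixed seed so runs are repeatable
    hits = sum(contend(n_clients, cw, rng) == "collision"
               for _ in range(rounds))
    return hits / rounds

# More clients fighting over the same window -> more collisions;
# a wider window gives everyone more room to miss each other.
print(collision_rate(2), collision_rate(20), collision_rate(20, cw=63))
```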

A bit of clever manipulation of the window size by the AP can result in a lot of radios sharing airtime effectively. This is enhanced further by the multiple APs all claiming to be the same BSSID. Clients in different physical locations, separated enough that they can be received clearly on different APs, give you the ability to have multiple clients transmitting at once.

A further benefit to clients is the ability to move around a space without needing to roam between APs. Every AP operates in such a way that, to the client, they all appear to be one.

This means the RF design element varies a bit from your usual small cells on different channels. So long as you get the spacing about right and have the radios to cope with the expected density, everything should be good.

“Should”, it’s a good word. I’ve seen real world implementations where the power is cranked up to gain coverage. Combined with allowing 1 Mbps data rates, it resulted in clients outside the building latching on and burning airtime. In high density locations, you could see incredibly high re-transmit rates and poor throughput.

Don’t get me wrong, there is a place where single channel architectures shine – VOIP in office spaces. The lack of roaming from the client means that there’s less chance of a call being dropped. OKC and friends are making this less of an issue in multi-channel architectures.

Once the density starts cranking up to hundreds of clients in a small space, you start to run into real problems with clients fighting for airtime. Even with multiple APs on the same channel covering an area, you don’t necessarily get the separation needed to be able to receive the frames arriving at the same time clearly.

In this scenario, vendors recommend using "channel layering" or, as you might otherwise call it, a multi-channel architecture. It makes sense that after a certain point you just need as much spectrum as possible to share the load across. But it also feels like the key benefits of the single channel architecture have disappeared once you're doing this.

Sadly, real world experience tells me this doesn’t always work. Clients will often latch onto a single channel and have no incentive to ever roam. The RSSI never drops and the client will just keep dropping the data rate until it gets the re-transmit count down. If you’re unlucky, most of your clients will have settled on one channel in a small area, resulting in a high collision rate and a poor experience for all.

802.11k should be able to assist with this. However, it relies on client support, which can be spotty at best.

It gets even worse if you're transitioning from a single channel deployment to a multi-channel deployment. The high TX powers used will result in clients latching on to the single channel system and having no incentive to roam onto the multi-channel system as they move between areas.

On a more positive note, it's very easy to deliver incredible client throughput rates in a single channel architecture. You can use 80 MHz and even 160 MHz channel widths, combined with high QAM levels, without worrying about channel re-use. You do need no neighbours anywhere near you to pull this off in the real world though. If they appear on any of your secondary channels, one of you is going to have a poor experience.

While there’s a lot of issues in the single channel system, it’s not all plain sailing in the multi-channel world either. As you’ve probably picked up by now, we’re transitioning from a single channel model to a multi-channel model. One of the key factors I’ve put in the design phase is to build for density first. At a low level, that means 20 MHz channels over 40 or 80, resulting in a lower top throughput rate, but more spectrum available to spread clients across.

We’re also clamping transmission power and relocating APs. In the single channel deployment, APs were often placed in corridors with a clear view of each other. Not an issue as transmission power was cranked up. Doing the same with a system operating automated radio management will result in shrunken cells and a poor experience in office spaces.

While all the planning, modelling and surveying is taking time and slowing things down, it’s proving to be worth the effort. User feedback has improved considerably in the buildings we’ve completed so far. That doesn’t take into account that the back-end is being re-engineered as well to be more robust, reliable and flexible. Changing the RADIUS servers has done wonders for the support side of things and reliability in off-site eduroam authentications. There’s more to come with firewalls, backhaul and monitoring/analysis.

We’ve still got a way to go with the back-end but the change so far, simply moving to a well planned multi-channel architecture with a knowledgable team behind it has done wonders so far. Even if it has involved buildings that had been constructed less than a year ago and deployed with the single channel system.

One thing I don’t want you to come away from this article thinking is that there’s no place for single channel architectures. In the right scenario (relatively low density, possibly VOIP in an office space), it’ll do the job. For a large campus network seeing over 30k clients on an average day, we need all the spectrum we can get to keep those clients talking.

L3 MPLS VPN on Brocade Ironware
Mon, 31 Jul 2017 - https://www.dlineradio.co.uk/articles/l3-mpls-vpn-on-brocade-ironware/

This might seem a bit of an odd post to be making. Not just because the MLXe router we're using will probably be painted a very fetching shade of purple in future editions, but also because, due to a lack of spare hardware, I had to pair it up with a Mikrotik hEX.

’twas a bit of a little and large situation.

In all seriousness, our aim is to get the two core routers to build an L3 VPN tunnel over MPLS. We’ll be using BGP to advertise the ends of the tunnel, allowing us to very easily expand out.

In short, our two routers will be provider edge routers, the MLXe towards the bottom of the diagram and the hEX further up.

Throughout this we will be concentrating on the MLXe side of things. The documentation on the Mikrotik wiki has enough great examples to get through the process.

The starting point is two routers with loopback interfaces sharing routes via OSPF. Everything is built on top of this basic L3 network. The required configuration to make this happen looks a little like this:
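As a sketch of what that base configuration might look like on the MLXe side - the addresses and interface numbers here are illustrative, with ve 121 being the routed interface between the two routers:

```
interface loopback 1
 ip address 10.0.0.1/32
 ip ospf area 0
!
vlan 121
 tagged ethe 1/1
 router-interface ve 121
!
interface ve 121
 ip address 10.0.121.1/24
 ip ospf area 0
!
router ospf
 area 0
```

The hEX side is the mirror image, with 10.0.0.2 on its loopback.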

Assuming all is well, you should be able to ping between the two loopback interfaces. If not, debug the problem before going any further, we need a working base to start with.

We can now enable MPLS and use LDP to distribute labels across the network. Note that you’ll need a line card with support for MPLS installed for this to work.

router mpls
mpls-interface ve 121
ldp-enable

At that point, not much will appear to have changed, even though a lot has. You should still be able to ping. However, if you run the show mpls ldp database command, you'll see that the next hop maps onto a label. That's the label that will be attached as the packet is encapsulated between the routers.

In order to keep the VPN we’re building separate from the underlying network and any other VPNs we may implement later, we’ll be placing it into a virtual routing and forwarding (VRF) instance.
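On Ironware, a VRF definition along these lines does the job - the name, numbers and interface are illustrative rather than the exact production config:

```
vrf CAMPUS-VPN
 rd 65530:2
 address-family ipv4
  route-target export 65530:2
  route-target import 65530:2
 address-family ipv6
  route-target export 65530:2
  route-target import 65530:2
 exit-vrf
!
! Placing a VLAN's virtual routing interface into the VRF
vlan 200
 tagged ethe 1/2
 router-interface ve 200
!
interface ve 200
 vrf forwarding CAMPUS-VPN
 ip address 192.168.200.1/24
```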

There’s a lot going on here so let’s break it down. To start with the name for the VRF instance is completely arbitrary. Pick an appropriate name for your own implementation.

One of the more interesting bits is rd 65530:2. That's the route distinguisher we're attaching to the VRF. It's merely a unique ID we'll be using throughout the infrastructure. In a larger network, each customer would have one to keep their routes separate.

The format for the route distinguisher is usually [AS number]:[number]. Throughout this example, we'll be using the AS number 65530. You can use your real AS number if you have one. Otherwise, pick a number from the private range.

Either way, in our smaller campus network, the route distinguisher is used to divide network roles. The AS number will be consistent, but the route distinguisher will be specific to the VPN.

The route-target import and export settings are where we actually make use of this. The route distinguisher used for a specific circuit or customer could be different at each end; the route targets determine which routes are tagged on export and which are accepted on import. In our case, BGP is the routing protocol of choice.

Beyond this, we simply enable IPv4 and IPv6 for the VRF in question. The latter configuration is an example of how to get a VLAN and virtual routing interface into a specific VRF.

With the local routing instance now ready to go, we can look at configuring BGP to pass the routes required for the VPN to work.
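The base BGP configuration being described looks roughly like this - the neighbour address is the hEX loopback from the earlier example, and the rest should be treated as a sketch:

```
router bgp
 local-as 65530
 next-hop-mpls
 neighbor 10.0.0.2 remote-as 65530
 neighbor 10.0.0.2 update-source loopback 1
```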

Here we set the router’s Autonomous System (AS) number. The AS model is a core part of the BGP infrastructure. In our model, we’ll be using the same AS number on all of the routers.

The next-hop-mpls line tells the router to prefer MPLS over IP routing for the next hop. Finally, we set our neighbour (or peer) up on 10.0.0.2 with the same AS number, using the loopback interface as the update source. If you're talking to routers from other vendors, you may need to enable multi-hop for connections between peers.

And now for where the real magic happens. Here, we enable VPNv4/v6 to advertise our tunnels. The extended communities are enabled for each peer.
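In sketch form, that address-family configuration looks something like this (exact syntax varies slightly between Ironware releases):

```
router bgp
 address-family vpnv4 unicast
  neighbor 10.0.0.2 activate
  neighbor 10.0.0.2 send-community extended
 address-family vpnv6 unicast
  neighbor 10.0.0.2 activate
  neighbor 10.0.0.2 send-community extended
```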

The requirement to configure for each peer is a bit of a sticking point in BGP. In a small setup like this (two routers), we can manage the peering. When the number of routers expands out, a full mesh needs to be maintained. The use of route reflectors allows you to move away from this requirement and should be considered in larger designs.

The final stanza is where we actually start to advertise the routes for our VPN. It’s also the slightly confusing element.
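Sketched out, with the VRF name matching the earlier illustrative example, it looks like this:

```
router bgp
 address-family ipv4 unicast vrf CAMPUS-VPN
  neighbor 10.0.0.2 activate
  redistribute connected
```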

We enable IPv4 routing for BGP on the VRF we created earlier. Again, we must specify the neighbours/peers we’ll be tunnelling between.

The redistribute connected line is something you probably won’t see in a production configuration. It redistributes routes for all connected interfaces in that VRF into BGP.

In most cases, you’ll either see the redistribution of OSPF routes into the BGP VPN or even another instance of BGP on a different AS number. If you were on the customer side of this arrangement, you’d have to specifically configure BGP to allow routes to pass for your own AS number through another AS.

Either way, that’s all it takes to configure L3 MPLS VPNs using BGP on Brocade Netiron hardware. Next steps on my own journey are to enable dynamic routing from VRFs at the distribution layer, experiments with RSVP for guaranteeing bandwidth and demoing 802.1x on the wire. None of it’s new stuff to me but it’s an interesting exercise bringing it all together.

Moving Zones Between Views in Infoblox
Fri, 23 Jun 2017 - https://www.dlineradio.co.uk/articles/moving-zones-between-views-in-infoblox/

It turns out that moving zones between views in Infoblox is a surprisingly hard thing to do. The challenge ended up on my desk after we made the decision to operate a separate external view for a selection of our domains. At that point, we needed to move a selection of the domains from the internal view to the new external view.

One way of doing this is to export each zone, one at a time as a CSV file and import it back in. Unfortunately, we found it tended to run into errors doing this and would have been incredibly tedious to do for the number of zones involved.

The approach I took in the end was to use the web API to get all the records we need out of the zones, create new zones and duplicate the records in the new view. This isn’t my first time making use of the web API in Infoblox but it did (thankfully) prove powerful enough to do the job.
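In skeleton form, the approach looks something like this. It's a sketch rather than the production script: the requests library is assumed, the WAPI version, host and credentials are placeholders, and only A records are copied - other record types follow the same pattern.

```python
# A cut-down sketch of the zone move via the Infoblox WAPI. The WAPI
# version, credentials and views are placeholders, and only A records
# are copied - other record types follow the same pattern.
import requests

def zone_names(session, wapi, view):
    """Return the set of authoritative zone FQDNs in a DNS view."""
    r = session.get(f"{wapi}/zone_auth",
                    params={"view": view, "_return_fields": "fqdn"})
    r.raise_for_status()
    return {z["fqdn"] for z in r.json()}

def zones_to_move(src_zones, dst_zones):
    """Only move zones that don't already exist in the destination."""
    return sorted(src_zones - dst_zones)

def copy_zone(session, wapi, fqdn, src_view, dst_view):
    """Create the zone in the destination view, then duplicate its
    A records into it."""
    session.post(f"{wapi}/zone_auth",
                 json={"fqdn": fqdn, "view": dst_view}).raise_for_status()
    recs = session.get(f"{wapi}/record:a",
                       params={"zone": fqdn, "view": src_view}).json()
    for rec in recs:
        session.post(f"{wapi}/record:a",
                     json={"name": rec["name"],
                           "ipv4addr": rec["ipv4addr"],
                           "view": dst_view}).raise_for_status()

if __name__ == "__main__":
    wapi = "https://gridmaster.example.com/wapi/v2.5"  # placeholder
    session = requests.Session()
    session.auth = ("admin", "secret")                 # placeholder
    src, dst = "internal", "external"
    for fqdn in zones_to_move(zone_names(session, wapi, src),
                              zone_names(session, wapi, dst)):
        copy_zone(session, wapi, fqdn, src, dst)
```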

You will need to adjust a few things for your own use. In most cases it’s just a matter of adjusting the settings at the top of the script.

However, the logic we've used here is to move only the zones that don't already exist in the destination view. The idea behind this is that we'd already created some of the zones in the new external view that needed splitting out.

Either way, hopefully this script is of some use to you and saves a bit of pain hand cranking the move.

Compression, Loudness and Coverage - A Trade Off
Mon, 19 Jun 2017 - https://www.dlineradio.co.uk/articles/compression-loudness-and-coverage-a-trade-off/

It was always touted as the thing you wanted to be - the loudest on the dial. It was even said on occasion that it could help improve your coverage in rough patches. This was very much taken to heart by the small scale commercial station I was on the launch line-up of, which has since been lost to become a relay of two different stations.

While I wasn’t working as a techy for said station, it’s a topic that’s come up a number of times over my career. With a community station on the south coast, we improved their coverage area drastically by sorting out the processing. No turning up the wick required, simply using your full +-75kHz and developing a good preset on the processor made the magic happen.

If we really wanted to take it further we could have broadcast in mono. A rather old hat option nowadays but still something you occasionally see in areas where terrain is a real limiter.

A more modern solution would be Single Side Band (SSB) stereo. In a normal FM multiplex signal, the L-R element is carried as a pair of sidebands around a suppressed 38 kHz subcarrier. With SSB, only one of these sidebands is transmitted. Well, that's the theoretical version. In reality, the lower frequencies are still carried on both sidebands.
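Since that multiplex structure is easier to see than to describe, here's a short Python sketch (numpy assumed) that assembles a stereo MPX baseband and shows where the energy sits. The injection levels are illustrative rather than to-spec.

```python
import numpy as np

FS = 192_000            # sample rate comfortably above the 53 kHz MPX top end
t = np.arange(FS) / FS  # one second of samples
PILOT_HZ = 19_000

def stereo_mpx(left, right, pilot_level=0.09):
    """Assemble a standard FM stereo multiplex: the L+R mono sum at
    baseband, a 19 kHz pilot, and the L-R difference as double-sideband
    suppressed-carrier around 38 kHz. Levels are illustrative."""
    mono = (left + right) / 2
    diff = (left - right) / 2
    subcarrier = np.cos(2 * np.pi * 2 * PILOT_HZ * t)  # 38 kHz, locked to pilot
    pilot = pilot_level * np.cos(2 * np.pi * PILOT_HZ * t)
    return 0.45 * mono + pilot + 0.45 * diff * subcarrier

# A 1 kHz tone on the left channel only - maximum L-R content.
left = np.sin(2 * np.pi * 1000 * t)
baseband = stereo_mpx(left, np.zeros_like(t))

# The spectrum shows energy at 1 kHz (mono sum), 19 kHz (pilot) and
# 37/39 kHz (the two sidebands) - but nothing at 38 kHz itself.
spectrum = np.abs(np.fft.rfft(baseband))
```

SSB stereo, in its theoretical form, would drop one of those two sidebands and halve the bandwidth the difference signal occupies.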

While SSB stereo is on my to-do list to experiment with, there is a long running experiment I have been involved with at a community radio station. When I arrived the processing was mediocre and heavily compressed. You can make out what everyone’s saying but the music just doesn’t “breathe”. They’d nailed that pinched nose muffled sound.

On the plus side, it wasn't the loudest on the dial. Just a shame it's a processor you struggle to get a good sound out of, and one that may be due for replacement.

Backing off the limiter made a pleasantly noticeable difference to the sound. The overall level was brought up a bit to match competing stations and some mid removed to make speech more intelligible.

The result – a much more pleasant sound but cue a few complaints coming in. “The station drops out around [AREA], it didn’t use to”. “My car switches to [COMPETITOR] around [AREA]”.

As these complaints come in, I hop into my car, parked in a fringe coverage area of the station - it's coming through as well as ever. What's going on?

Well, it turns out simply being louder isn't everything, though it does make a difference if you're operating significantly below the ±75 kHz of deviation you're allowed. The key factors I found were how heavily you compress/limit and how heavily you load the low/mid frequency audio.

While it doesn’t make for the most pleasant listening experience (ear bleed ahoy!) it does give an impression of better coverage. In reality, not changing the RF properties of the transmitter means that there’s no change in actual coverage. 25W does just as well as it ever does.

However, the perceived coverage improvement is a big deal. Especially as the station in question suffers from co-channel interference. Dead patches often see radios switch over to this other service. Just a shame it has to come at the cost of a good station sound.

Modern Protocol Problems with an Old Firewall
Fri, 28 Apr 2017 - https://www.dlineradio.co.uk/articles/modern-protocol-problems-with-an-old-firewall/

Legacy systems are part of any mature technical environment. While I was both impressed and shocked when recently presented with a SunOS 5.7 box (OS dated 1998!) that I had to assist with an issue on, there are other places where legacy technology unfortunately can't quite keep up.

Take the Juniper NetScreen-5000 series of firewalls. While they’re still very capable firewalls, the firmware running on them is over a decade old and they’re very much end of life now.

Thankfully, it’s been replaced by something more modern but going through the process did bring some curious issues to light. The first of which was packets being dropped at low-ish throughput levels. The “in overrun” counter was slowly creeping up on all active interfaces. This counter logs packets that were received error free but the software didn’t have the resources to process. It basically means a buffer isn’t draining fast enough.

Usually you can tie this issue to CPU or memory resources running a bit too tight. In this case, we were seeing an idling CPU and very little memory in use. It was similar to an issue we (and a few other organisations) have seen with Meru wireless controllers, which effectively capped the throughput on said controllers in much the same way saturating a link would. The catch was, it happened at under 500 Mbps, well below the expected capacity of the hardware.

Anyway, back to the firewalls. Admittedly we never did find a good reason for the failure. Even the session count was well below the expected capacity of the hardware and software. But what we did see was a huge number of UDP flood attacks being logged.

Specifically, these packets were being seen from Google addresses on 443. If you’ve not twigged yet, UDP traffic on 443 to/from Google is completely normal and expected. Especially if your endpoints are running Chrome.

The QUIC protocol is intended to act as a replacement for TLS over TCP, with a goal of improving performance. Chrome uses it where possible as an alternative for web traffic. All those YouTube cat videos come down the pipe as QUIC if you allow it.

And that’s where our firewall gets confused. The software on it (never mind the hardware) was written before the protocol ever existed. Seeing all those UDP packets flying at it causes it to trigger the alarm and fill the log. It also took defensive action and dropped packets (that was logged in another counter).

While it may not have been the root cause of the failures on the box (we suspect resource exhaustion), it did pique my interest.

Reading Cart Chunk with PowerShell
Wed, 19 Apr 2017 - https://www.dlineradio.co.uk/articles/reading-cart-chunk-with-powershell/

Look at the audio files in any professional radio playout system. They'll likely be linear WAVE files with the associated metadata stored in the cart chunk format. This information is usually enough to successfully share audio between stations and systems (there are some issues around time markers, but they can be worked around).

With that in mind, a recent project required me to read the cart chunk data from an existing playout system using PowerShell. It might not seem an obvious option for this but as the ubiquitous scripting language on the Windows platform, it should be possible and require a lot less work than writing and maintaining a full .NET application.

Before we crack into the code, it’s worth taking a look at how a WAVE file is broken down. At the top level it’s a single “chunk” called the RIFF chunk. The header of this chunk is made up of two 4 byte values – the tag (“RIFF” in this case) and the length of the chunk content. As this is the top level chunk, the length is the length of the rest of the file.

Within this top level chunk, you’ll see a number of smaller chunks. Some are required (e.g. data and fmt), others not so much (e.g. cart and bext). These chunks all use the same header format as the top level chunk. That means it should be simple enough to skip through the file looking for the chunk you want rather than reading the whole file into memory.

If you want a bit more information about the technical details of how the chunks are formatted, there are good references online. The fmt and data chunks are of most interest if you're planning to read or write the audio data from the files.
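To make that concrete before walking through it, here's a cut-down sketch of such a script. The cart chunk offsets follow the published layout (a 4 byte version, then 64 bytes each for title and artist), but treat it as a starting point rather than the full script:

```powershell
param([string]$Path)

$encoder = [System.Text.Encoding]::UTF7
$stream  = [System.IO.File]::OpenRead($Path)
$reader  = New-Object System.IO.BinaryReader($stream)

# RIFF header: 4 byte tag, 4 byte overall length, 4 byte form type
if ($encoder.GetString($reader.ReadBytes(4)) -ne "RIFF") { throw "Not a RIFF file" }
$null = $reader.ReadUInt32()
if ($encoder.GetString($reader.ReadBytes(4)) -ne "WAVE") { throw "Not a WAVE file" }

$result = $null
while ($stream.Position -lt $stream.Length) {
    # Every chunk starts with the same 8 byte header
    $tag    = $encoder.GetString($reader.ReadBytes(4))
    $length = $reader.ReadUInt32()
    if ($tag -eq "cart") {
        $chunk  = $reader.ReadBytes($length)
        # Cart chunk layout: version (4 bytes), title (64), artist (64), ...
        $result = [PSCustomObject]@{
            Title  = $encoder.GetString($chunk[4..67]).TrimEnd([char]0)
            Artist = $encoder.GetString($chunk[68..131]).TrimEnd([char]0)
        }
        break
    }
    # Not the chunk we want - skip its content (word aligned, so pad odd lengths)
    $stream.Position += $length + ($length % 2)
}
$reader.Close()
$result
```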

That’s the entire thing ready to go. Admittedly it only reads the title and artist fields but it wouldn’t take much to extend it into any of the other fields you need.

Either way, let’s take a closer look. One of the first lines to jump out would be:

$encoder = [System.Text.Encoding]::UTF7

The WAVE format (and cart chunk) is old enough that it specifies the fields should be ASCII. As ASCII has no support for accented characters, you'll often see UTF-7 encoding used instead. This is one of those real world vs. specification things.

A little further down you’ll see we read the file in as a binary and look for the RIFF tag we talked about earlier.

Assuming we’re all good, we shift past the initial header and enter the main loop. This loop is constructed so that we check every chunk in the file until we see the one we want. The location of the cart chunk in a WAVE file is not explicitly defined. You’ll find that playout systems vary between placing it ahead of and after the audio data.

This is one of the reasons skipping through the file rather than reading it in wholesale is a nicer approach. While we’re on the topic of skipping through the file, we calculate the length of our next skip using the following code:
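RIFF chunks are word aligned, so a chunk with an odd content length is followed by a pad byte that isn't counted in the header. A sketch of the calculation (the variable names here are illustrative):

```powershell
# Odd-length chunks carry an unadvertised pad byte
$skip = $chunkLength + ($chunkLength % 2)
$stream.Position += $skip
```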

From this bigger buffer we can now read in the cart chunk contents. In this example, we’re only extracting the artist and title which we then present back to the user as an object. It’s here that you’ll want to add any code of processing further fields.

And that’s all you need to read cart chunk in PowerShell. Turns out it’s much simpler than I thought it would be.

NATing (well, PATing) Specific Address Ranges on a Fortigate
Sun, 02 Apr 2017 - https://www.dlineradio.co.uk/articles/nating-well-pating-specific-address-ranges-on-a-fortigate/

It's a simple little thing I struggled to find any real documentation on, but it can entirely be done on Fortigate firewalls. The challenge was simple - take a network made up of globally routable IPv4 addresses and private address space, hold it behind a perimeter firewall, and selectively NAT/PAT traffic from the private address space.

The trick involves firewall rules. You start by creating an address pool you will be translating and overloading into. For my tests, I simply selected a single IP address for each range I'd be NATing.

Now you need to create a new rule (or policy in the Fortinet world) running from your internal to your external zone. I’m assuming a rather simplified network arrangement – adjust as required.

Set the source address as the private range you want to translate and overload for. The destination address and service will likely be “Any”. Turn NAT on (toward the bottom of the page) and select “Use Dynamic IP Pool”. Select the address pool we created earlier.
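If you prefer the CLI, the same thing looks roughly like this - the pool name, addresses, interfaces and source address object are placeholders to adapt, and exact syntax varies a little between FortiOS versions:

```
config firewall ippool
    edit "pat-pool-guest"
        set type overload
        set startip 198.51.100.10
        set endip 198.51.100.10
    next
end
config firewall policy
    edit 0
        set name "pat-guest-out"
        set srcintf "internal"
        set dstintf "wan1"
        set srcaddr "net-10.20.0.0"
        set dstaddr "all"
        set service "ALL"
        set schedule "always"
        set action accept
        set nat enable
        set ippool enable
        set poolname "pat-pool-guest"
    next
end
```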

Now save the rule and your NAT/PAT should be working. As a further step, I’d highly recommend you have an inbound rule in place dropping traffic from sources that matches your private address space. You could even go a stage further and look into Bogon block lists should your ISP not offer such a service.

First Steps into OpenDaylight with Brocade
Sun, 02 Apr 2017 - https://www.dlineradio.co.uk/articles/first-steps-into-opendaylight-with-brocade/

It's not often that catch-up meetings with vendors lead to anything particularly exciting, but a recent one I attended with Brocade was a little different. While Brocade have had OpenFlow support for a very long time, it was pointed out that, teamed with OpenDaylight, it might solve a few problems we're looking to tackle.

While they do have their own packaged offering, I took the opportunity recently to start experimenting with what we can do in the open source version of OpenDaylight. After all, it’s free and the switches we operate already have support for it.

So… off to the lab and get plumbed in.

A laptop and MLXe plumbed in.

Ok, that rig might be a little overkill and a setup for the photo. Either way, I’m now working with the much smaller ICX 7450 units.

Before we get really into it, we ought to cover the components needed to make the whole thing work. We’ll need OpenFlow compatible switches to work with. That’s where the Brocade kit comes in. From memory, Comware 7 devices in the HPE world can also do the job but we’ll be concentrating on Brocade as that’s what my current employer runs.

We also need a controller of some form. That's where OpenDaylight comes in. For my experiments, I've been running it in a VM on a laptop, but any Ubuntu 16.04 server box should make a good starting point.

With a rough idea of what we’ll be setting up, we can now get into it. Let’s start with getting a controller running.

Grab yourself a copy of Ubuntu Server 16.04 (LTS). Configure it with a static IP address and ensure it has full access to your physical network. That means if you’re running the VM in VirtualBox, you’ll need to change the network interface mode from NAT to bridged.

With the server up and running, we can now look at installing the OpenDaylight controller. The basic installation is detailed on the OpenDaylight website - you want to follow the steps for a DEB based installation.

Once you’re into the karaf console (the console used to manage the OpenDaylight controller), there’s a number of modules you’ll need to install. To get you going quickly, run the following command:
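The command itself isn't preserved in this copy of the post, so what follows is an assumed starting set rather than the original. On the OpenDaylight releases current at the time, a common combination for RESTCONF, basic L2 switching and the DLUX web UI would be:

```
feature:install odl-restconf odl-l2switch-switch odl-mdsal-apidocs odl-dlux-core odl-dlux-yangui
```

Feature names vary slightly between releases; `feature:list` in the karaf console will show what's available on yours.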

Some switches will need limits configuring out of the box, but you will be warned in those cases. Either way, all we've done so far is enable OpenFlow globally and set the IP address of the controller. We've also set the switch not to use SSL on the connection. This is fine in the lab, not so great in the real world.
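For reference, the global switch-side configuration described above looks something like this on Brocade FastIron kit – the controller address is a placeholder, and exact syntax varies by platform and software release, so check your documentation:

```
openflow enable ofv130
openflow controller ip-address 192.168.1.10 no-ssl
```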

The connection to the controller can be checked with the show openflow command.

Checking the status of OpenFlow on a switch.

The observant among you will see that I've already enabled OpenFlow on a couple of ports on this switch. To do this, run the following commands:

interface ethernet 1/1/1
openflow enable layer23 hybrid

You will likely get a scary warning when you do this. Heed it: FDP and, more worryingly, STP are disabled as part of this.

The switch warns us about OpenFlow vs FDP and STP.

This doesn’t happen on every switch but is worth noting. Another point to bear in mind is that on certain switches (e.g. MLXe), ports cannot be added to OpenFlow if they’re not on VLAN 1. The port(s) can however be added to the VLAN again after enabling OpenFlow.

Either way, we can now confirm that we're getting data by looking in the OpenDaylight web console (http://[ipaddress]:8181/index.html) and selecting the Nodes option in the menu. If it's working, you'll see something like this:

OpenFlow shows a live node.

Clicking on the “Node Connectors” link will show you the interfaces OpenFlow is enabled on and reporting from. The Flows link, however, won't show you any information. If you're anything like me, that should strike you as a bit odd – especially if you've got traffic flowing over the link.

Turns out you need to install flows in the switches to act on them. Otherwise, the traffic will be forwarded as per normal with little to no information coming back to the controller.

The simplest way to install a flow is through the Yang UI. Before we go into it, you'll need to make a note of the OpenFlow ID for the switch you're interested in applying this to.

Armed with this information and now in the Yang UI, select opendaylight-inventory -> nodes -> node -> table -> flow from the API tree at the top of the screen. Now change the HTTP action from GET to PUT and enter the following information:
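The example body itself isn't preserved in this copy, so the following is a minimal sketch of the sort of thing the Yang UI generates. The node ID (openflow:1), table 0, flow ID 1, and the match-IPv4-and-flood behaviour are all placeholder choices of mine, not the original article's:

```json
{
  "flow": [
    {
      "id": "1",
      "table_id": 0,
      "priority": 100,
      "flow-name": "example-ipv4-flood",
      "match": {
        "ethernet-match": {
          "ethernet-type": { "type": 2048 }
        }
      },
      "instructions": {
        "instruction": [
          {
            "order": 0,
            "apply-actions": {
              "action": [
                {
                  "order": 0,
                  "output-action": { "output-node-connector": "FLOOD" }
                }
              ]
            }
          }
        ]
      }
    }
  ]
}
```

This would be PUT to a path of the form restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/0/flow/1 on the controller.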

You need to be careful that the table ID and flow ID values match in both the JSON (or XML) and the URL.
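Since mismatched IDs are an easy mistake to make, one way to avoid it is to derive the RESTCONF URL from the flow body itself rather than typing both by hand. A quick sketch – the controller address and node ID are hypothetical placeholders:

```python
def flow_put_url(base, node_id, body):
    """Build the RESTCONF PUT URL from the flow body itself,
    so the table and flow IDs in the URL always match the JSON."""
    flow = body["flow"][0]
    return (f"{base}/restconf/config/opendaylight-inventory:nodes"
            f"/node/{node_id}/table/{flow['table_id']}/flow/{flow['id']}")

# Hypothetical controller address and node ID, for illustration only.
body = {"flow": [{"id": "1", "table_id": 0, "priority": 100}]}
print(flow_put_url("http://192.0.2.10:8181", "openflow:1", body))
```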

A couple of other points worth noting are the flow ID and the flow table. The former is simply a unique ID for each flow in a table on the switch. The latter is one of a series of flow tables that a packet can be processed through as part of the more complex chains you can build up in OpenFlow.

As you can see in the screenshot below, it’s possible to add a lot more options for selecting what matches a flow you want to take action on. This means that you could be incredibly specific about what you want to forward, drop or even redirect.

Some of the flow match options in OpenDaylight/OpenFlow.

The way I had it sold to me was that you could selectively bypass a firewall under controlled circumstances. A bit of a brave option, but one that could get you out of a sticky spot.

Assuming you send the request and everything is A-OK, you should now have a flow programmed in the OpenFlow database. Ideally, this flow should now be programmed into the switch as well. Except it isn't.

A quick check with a packet sniffer shows the switches and controller are talking. So why isn’t the flow being installed?

Unfortunately, this is where I ran out of time to look at this issue but I will update if I get time to experiment. That said, my speculation would involve the hybrid port feature on the Brocade switches. It’s an option I had to enable to allow basic routing between the two switches to work once OpenFlow was enabled.

Either way, the technology looks interesting. I guess I’ll just need to spend a bit more time with it and hopefully be able to build the slightly clever mirror port feature I need for an upcoming project.

Logging Failure Reasons in FreeRADIUS
https://www.dlineradio.co.uk/articles/logging-failure-reasons-in-freeradius/
Tue, 21 Feb 2017 12:04:33 +0000

FreeRADIUS is an incredibly powerful RADIUS server and a tool I'm currently taking through a proof of concept as a possible replacement for Microsoft NPS. While the debug output of FreeRADIUS is fantastic in an interactive console, it doesn't quite log everything to disk, syslog or SQL in normal operation.

One of the useful bits of information it would be nice to get is the failure reason, logged to the radpostauth table in MySQL. That would make it much easier to trace why users are struggling to log into the service. It's also assisting right now with some troubleshooting.

Making it happen isn't complicated. We need to add an extra column to radpostauth to log the failure reason messages. To do this, run the following command in MySQL:

ALTER TABLE radpostauth ADD COLUMN message TEXT;

While that adds a place to log the data, we still need to tell FreeRADIUS to write the data to the new field. To do this, we edit the /etc/raddb/mods-config/sql/main/mysql/queries.conf file to update the post-auth query. The result should be something like this:

post-auth {
	# Write SQL queries to a logfile. This is potentially useful for bulk inserts
	# when used with the rlm_sql_null driver.
	# The query below follows the stock 3.x version with the message column
	# added; the Module-Failure-Message expansion is my assumption, as the
	# original article's query didn't survive in this copy.
	query = "\
		INSERT INTO ${..postauth_table} \
			(username, pass, reply, authdate, message) \
		VALUES ('%{SQL-User-Name}', '%{%{User-Password}:-%{Chap-Password}}', \
			'%{reply:Packet-Type}', '%S', '%{request:Module-Failure-Message}')"
}

Restart FreeRADIUS to apply the changes. The result of all this work should be something like this:
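To confirm rejects are being captured, a quick check against the new column (table and column names as above):

```sql
SELECT username, reply, authdate, message
FROM radpostauth
WHERE reply = 'Access-Reject'
ORDER BY authdate DESC
LIMIT 10;
```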

Moving a Point-to-Point Microwave Link
https://www.dlineradio.co.uk/articles/moving-a-point-to-point-microwave-link/
Tue, 21 Feb 2017 11:12:06 +0000

At one station I volunteer for, we use a 1.5GHz microwave link to connect the studio to the transmitter site on top of a nearby hill. Now I should stress that microwave link doesn't mean digital: it's a one-way FM link, with the multiplexed baseband signal for transmission generated at the studios. The upshot is that while the link itself is expensive, the remote equipment is simple – receive on 1.5GHz, demodulate and re-transmit on the broadcast FM frequency.

The downside is that we get little telemetry from the unit. We can see SWR, signal and other transmission metrics on the units, but there's no reporting back from the remote end. That said, once it's operational, there should be very little maintenance required.

With the building that the studio-end antenna hangs on being renovated, and at the council's request, we were tasked with moving the link with as little outage as possible. Not much of an issue given it was a short-distance move, but we did learn a few lessons.

The first is that these analogue microwave links are rather robust. We were allowed a fair bit of movement as we unbolted the “mast” from the rusting brackets – though it did result in us holding the pole in place for a number of minutes while an ad break played out!

Another lesson was that there's not much to these links: get the right co-ax (50 ohm, RG-213 or better), terminate it properly, and there's little to go wrong. Remember to weather-seal it properly and it should be good for years with little active work.

Rough line-up was simple: keep twisting the pole until we got something on air. Final line-up involved a visit to the remote end, keeping an eye on the signal strength meter built into the receiver. That visit also allowed us to look into coverage problems that had been reported by some volunteers. We were able to confirm the correct operation of the transmitter and even give the antenna system a clean bill of health.

The one thing that did catch us out on this job was the simplest part – moving a minidish used to pick up IRN. We had real problems trying to line it up and even started coming to the conclusion that the LNB had been damaged in the move. It turned out that a short length of co-ax used to connect the signal meter had shorted.