Monthly Archives: March 2011


If I asked you to click on the following icon, what would you expect it to do?

Odds are good that you said “Save whatever I’m working on”. A thought struck me a few months ago when I was working in an unfamiliar GUI and needed to save my config before I went to lunch. I didn’t know what the keyboard shortcut was, and the buttons didn’t have text labels. Instinctively, I looked for the ubiquitous floppy disk icon, and sure enough it saved the config.

If you’ve been in technology for a while, you know what that picture is. More importantly, you recognize the metaphor that it represents. When we click on that icon, whether it be in MS Office or any other GUI toolbar, we expect that icon to save our document or our configuration or whatever the focus of our efforts might be. It’s a metaphor that has been drilled into our heads ever since we first started “saving” documents to a 3.5″ floppy disk back in DOS (or other older OSes). For grins, why don’t you see if you can still buy a 3.5″ floppy disk. Go ahead, I’ll wait…

Back so soon? Couldn’t find one, could you? Floppy disks have gone the way of their larger 8″ and 5.25″ cousins. They are practically impossible to find. I still carry a couple in my backpack for strange emergencies, but they are starting to get old and unreadable. I still have a 3.5″ drive on my desktop PC, but it’s more for legacy reasons than anything else. The last thing I even used it for was a BIOS update, but since those are done in the OS now, I don’t even need it for that. I remember the uproar a few years ago when PC manufacturers started removing them from systems. Now, finding a drive is next to impossible.

Now extend that idea further. I have a young son who has never seen a floppy disk. How am I supposed to explain to him what that little icon means? He’s going to grow up saving documents to USB drives or network shares, or even…the cloud. He’ll know what that button does because he will be taught to save his documents with it. He won’t understand why that icon means “save” though. He won’t remember the sound of the drive head skittering across the disk as it reads and writes data. He won’t hear the anguished sound of a drive reading a bad disk, screeching over and over as it tries to find data that isn’t there. He won’t shout in anger as he finds out the file he needs to save is 1.6MB and he can’t save it to the disk.

So, for the upcoming generation, do we need to change our save metaphor? Should we change the icon in the next version of Office to something that newer users can relate to? Maybe a safe icon? In most applications I’ve used, a safe icon is used to denote a backup option. Or how about a cloud? I don’t know about you, but teaching my users that preserving their data involves launching it into the Great Unknown Cloud doesn’t necessarily sit well with me. Maybe a USB drive, which has now become the de facto portable storage device? My issue with that is the icon wouldn’t be distinctive at smaller resolutions. Add in the fact that USB drives don’t come in one universal size or shape, and you might just end up confusing your users.

Tom’s Take

Maybe we don’t need to change our metaphor for saving documents. Most people today understand what happens when they push the little blue square with the little white square inside it. I’m afraid that we will eventually reach a point where the context behind the icon will be long gone and people will just be pushing it because they know that if they do, the term paper they’ve been writing doesn’t go KABOOM! Most of the menus have already done away with the floppy disk metaphor, and power users know that CTRL+S will accomplish the same thing. Until we can find something universal that speaks to everyone and says “Click me to save your document”, I suppose we’ll have to carry on with our little floppy disk. Just remind me to put one in a museum somewhere so I can show my son what Daddy used to keep his WordPerfect for DOS documents on.


One of the many wonderful things I get to do at $employer is work on voice systems and convincing my customers to move from old clusters of analog trunks to new, shiny Primary Rate Interface (PRI) trunks to carry their calls. PRIs are wonderful things, capable of taking up to 23 calls at a time, providing calling party and called party information, and dispensing with the need to have kludgy “rollover” analog trunks. However, in my experience with turning these circuits on, the worksheet the telco provider sends out reads like Greek to most network enginee…rock stars. It took a while for me to figure out what all the obscure acronyms meant, since the telco just assumed that I knew what they all stood for. In an effort to provide help to my readers that may not be telco people, or might be getting forced into working on a PRI worksheet, I thought it might be helpful to provide some translations.

PIC/LPIC – Probably the most confusing acronym out of the bunch. PIC stands for Primary Interexchange Carrier. This is your long distance carrier. This is a code that is kept in a database, and when you need to make a long distance call, the telco consults this database to know whose network to send the call along. A great explanation of long distance calls can be found HERE. Conversely, the LPIC is the Local Primary Interexchange Carrier. In other words, they are the company that handles your local calls that aren’t long distance. These two providers can be different, and in many cases they are. In rural areas, the LPIC is the local telco, and the PIC is a larger carrier like AT&T or Verizon. I’ve found that many companies will give you a deal if you specify them for both PIC and LPIC. Most of the time, the PIC/LPIC choice will be whoever is installing the PRI for you, such as AT&T or Cox Communications.

DID – Another one that confuses people. In this case, DID stands for Direct Inward Dial. This is a huge change from the way an analog circuit works. With an analog circuit (like my house), when you call my number it sends an electrical signal along the wire telling the device at the other end to ring. When we hook this circuit up to a CUCM/CCME system, we usually have to configure Private Line Automatic Ringdown (PLAR) in order to be sure something gets triggered when the electrical signal arrives. A PRI doesn’t use electrical signals to trigger ringing. Instead, calls are set up with two different fields, the Calling Party and the Called Party. In this example, the Calling Party is what is most often referred to as “Caller ID”. The Called Party on a PRI is the DID. This is a number that is delivered to the PRI and sent to the PBX equipment on the other end. The name comes from the fact that these numbers are most often used to directly reach internal extensions without the need to reach a PBX operator or automated attendant. The DID can be configured to ring a phone, a group of phones, or even a recording. The numbers that used to belong to your analog circuits will usually be moved over to a group of DIDs and pointed at the PRI.
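To make the DID idea concrete, here is a minimal Python sketch of what “pointing DIDs at the PRI” amounts to: the called-party number delivered on the circuit selects a destination in the PBX. All the numbers and destination names below are invented for illustration, not taken from any real worksheet.

```python
# Hypothetical sketch: the Called Party number (the DID) delivered on the
# PRI selects a destination in the PBX. Numbers and targets are made up.

did_routes = {
    "4055550100": "reception-hunt-group",
    "4055550101": "extension-1001",
    "4055550199": "after-hours-recording",
}

def route_call(called_party: str) -> str:
    # An unknown DID typically lands on an operator or automated attendant.
    return did_routes.get(called_party, "auto-attendant")

print(route_call("4055550101"))  # extension-1001
print(route_call("4055550175"))  # auto-attendant
```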

Outpulsed Digits – This one sounds straightforward. Digits are being sent somewhere, right? Remember that this worksheet is from the perspective of the service provider, so the outpulsed digits are what the provider is sending to your equipment. You have tons of options, but most providers will usually limit your options to 4, 7, or 10 digits. This is the number of digits that you get from the PRI to determine where your calls get sent. Since I’m a big fan of using translation patterns on my systems to send the digits around, I tend to pick 7 or 10 digits. In areas like Dallas, you may be forced to take 10 digits, as most metro areas are now mandatory 10-digit dialing. This also helps me avoid dial plan collisions when a phone number for a site is the same as a 4-digit extension internally. If I get 7 digits coming from the PRI, I can be pretty sure that none of my extensions will have the same number. If you don’t want to configure translation patterns and have a lot of DID numbers that correspond to phone extensions, you may want to consider a 4-digit outpulse setup from the telco.
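As a rough illustration of the translation-pattern approach, here is a Python sketch of the digit manipulation involved in turning a 10- or 7-digit outpulsed number into a 4-digit extension. The area-code handling and extension scheme are assumptions for the example, not a real dial plan.

```python
# Sketch of the digit translation a translation pattern performs: the telco
# outpulses 4, 7, or 10 digits, and we reduce them to an internal extension.
# The numbering scheme here is an invented example.

def translate_did(outpulsed: str) -> str:
    """Map an outpulsed called-party number to a 4-digit internal extension."""
    if len(outpulsed) == 10:
        outpulsed = outpulsed[3:]   # strip the area code down to 7 digits
    if len(outpulsed) == 7:
        return outpulsed[-4:]       # keep the station digits: 555-1234 -> 1234
    if len(outpulsed) == 4:
        return outpulsed            # telco already outpulses 4 digits
    raise ValueError(f"unexpected digit count: {outpulsed}")

print(translate_did("4055551234"))  # 1234
print(translate_did("5551234"))     # 1234
```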

NFAS – This one I don’t use very often, but it might come up. NFAS stands for Non-Facility Associated Signaling. This is used when you have more than one PRI configured in your environment. With a 24-channel PRI, 23 of those channels are used to provide data/calls. These are bearer channels or B-channels. The 24th channel is used to send control and signalling data. This is the Data Channel or the D-channel. When you configure your environment with multiple PRIs, you have multiple D-channels to provide signalling. However, you can pay a premium for each of those D-channels. In an effort to save some money, the idea of NFAS allows one D-channel to provide the signalling for up to 20 PRI lines. The catch is that if the D-channel goes down for any reason, so does the signalling for all the PRIs participating in the NFAS setup. Usually, if you designate NFAS on your worksheet, the telco will make you choose whether or not to have a backup D-channel. This is a good idea just in case, because you can never go wrong with a backup.
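The channel arithmetic behind that savings is easy to sketch. Assuming the simple model described above (every standalone span is 23B+D, while an NFAS group dedicates one D-channel, plus an optional backup, for the whole group), a hypothetical Python calculator looks like this:

```python
# Rough math behind the NFAS savings (assumed model, not a telco quote).
# Standalone: every 24-channel span gives up one channel to signalling (23B+D).
# NFAS: one D-channel (plus an optional backup) signals for the whole group,
# so the remaining channels on every span can carry calls.

def bearer_channels(num_pris: int, nfas: bool, backup_d: bool = False) -> int:
    if not nfas:
        return num_pris * 23                 # 23B + D on every span
    d_channels = 2 if backup_d else 1        # primary D, optional backup D
    return num_pris * 24 - d_channels

print(bearer_channels(4, nfas=False))                # 92 call paths
print(bearer_channels(4, nfas=True))                 # 95 call paths
print(bearer_channels(4, nfas=True, backup_d=True))  # 94 call paths
```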

Station Caller ID – I include this one because of more than one issue I’ve gotten into with a telco over it. Like, a full-on yelling match. If you are given the option of using the station ID as the outbound caller ID, use it. You have much more control over how the caller ID is represented inside of CUCM than you do if the telco takes over for you. If you don’t use the station ID as the caller ID, the telco will usually use the first DID number in your list, or set it to the billing number of the main telephone line. As most PRIs I set up are usually for multi-site deployments, this creates issues. People see the caller ID of the headquarters or the administration building instead of the individual unit number. They call that number back expecting to get their child’s school (for instance), but instead get the board of education building. Some telcos will go to war with you about the inherent danger in letting the user specify their station ID for use with emergency services like 911 or 999. I usually tell the telco rep to get stuffed, since my route lists will get the Caller ID more correct than their ham-handed attempts to just slap a useless billing ID number on the PRI and call it good. If they pick a DID number that doesn’t appear in the phone book or in the PS/ALI database for the local emergency service provider, then you can get into a liability issue. Better to just check the “station ID” box and build your system right.

Tom’s Take

These were the most confusing parts of the PRI worksheets that I’ve filled out from multiple providers. I hope that my explanations help if and when you need to fill out your own sheet. If it saves time having to Google what LPIC and NFAS mean, then I’ll sleep happy knowing that you were able to conserve some of your Google-fu.


A few weeks back, I got a sneak peek at the new VDI Dashboard product from Xangati. They had given us a very quick overview of it at Tech Field Day 5, but I got a special one-on-one opportunity to get a product demo. What follows is information about what I saw.

With virtualization becoming such a hot topic in today’s IT environments, it’s only natural that people want to extend the benefits of centralized management and reduced hardware costs to the desktop level as well. VMware is accomplishing this through the Virtual Desktop Infrastructure (VDI), which allows end user desktops to be virtualized and loaded on less powerful hardware. The main processing is done on the back end by the vSphere for Desktops servers and presented to the users via PC over IP (PCoIP). This allows the user to experience the same desktop they would normally have, while making it portable across a variety of devices. This kind of reminds me of the ultimate extension of a roaming profile, only in this case the profile is your whole computer.

This process isn’t without issues, though. Before, the network was merely a transport medium for data moving from PC to server or PC to the Internet. However, when you abstract the operation of a PC to the point where it requires the network to operate, there can be an entirely new set of variables introduced into the troubleshooting process. Even things that we might normally take for granted, like watching a video, become bigger issues when the network is introduced as a medium for transporting all the data to a user endpoint. Factor in that the virtual team is usually not integrated with the network team, and you end up with a situation that often results in finger-pointing and harsh words. What’s needed is the ability to gather information quickly and easily and display it in an easy-to-read format for the team that might be troubleshooting the issue. Enter Xangati and their VDI Dashboard:

This product gathers information from various points in your VDI as well as your network and displays it in easy-to-decipher graphs and tables. For those in more of a hurry, the health index at the top allows at-a-glance digestion of the overall health of the VDI system. When everything is working as it should, this number will be nice and green. Once problems occur and monitoring thresholds are triggered, the color will go from worrisome yellow all the way to problematic red. This all occurs in real time, so you can keep up with what goes on as it happens. This is useful if you have a group of people that all come to work at the same time and spool up 10 or 20 new VDI systems as they log on for the day. You can view the impact this has on your VDI and network from the dashboard. You can also see when a user may have an adverse impact on the system from doing something they consider innocuous, such as watching an HD video and consuming much more PCoIP bandwidth than their non-video neighbors.
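As an illustration, that color-coded health index could be modeled as a simple threshold mapping like the Python sketch below. The cutoff values are my own invented assumptions, not Xangati’s actual thresholds:

```python
# Toy model of a green/yellow/red health index. The thresholds here are
# assumed values for illustration, not Xangati's real monitoring logic.

def health_color(index: int) -> str:
    if index >= 80:
        return "green"    # everything within monitoring thresholds
    if index >= 50:
        return "yellow"   # worrisome: some thresholds triggered
    return "red"          # problematic: widespread threshold violations

print(health_color(95))  # green
print(health_color(60))  # yellow
print(health_color(20))  # red
```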

In addition, the DVR-like functionality present in Xangati’s other products is extended here as well. You can “rewind” the view to a point where the problems started occurring and begin troubleshooting from ground zero. This is a decided advantage because as busy network rock stars, we aren’t always staring at our Single Pane of Glass (SPoG) when a problem happens. The ability to backtrack and see all the events leading up to the problem gives us the ability to take decisive corrective action quickly and efficiently.

Tom’s Take

I don’t have a large VDI setup to manage, but if I did I would consider the VDI Dashboard closely. It’s got a great view of all the things that could cause your deployment to go haywire. Easy to read with tons of great information about all the individual components that comprise the total VDI, this tool makes it very simple to diagnose issues and take corrective steps quickly to limit impact on your users. I haven’t played with it myself, but what I’ve seen makes me happy to know that when my users reach the point where I need to virtualize their Facebook Interface Terminals and LOLCat Creation Devices, I can count on Xangati and their VDI Dashboard to give me up-to-the-minute information.

Xangati gave me a one-on-one presentation prior to the release of their product and provided me with a press kit containing the image above. I was under no requirement to write an article describing my briefing. The opinions and views expressed in this review are mine and mine alone.


When you set up your first Cisco Unified Communications Manager (CUCM) server, you’ve got a lot of programming to do. You have to program phones and partitions and calling search spaces. You have to worry about gateways and route patterns and voice mail. Many times, the default settings in the setup will be more than sufficient to get you up and running quickly. However, there is one default that you must avoid no matter what. The dreaded <none>.

You see, when you configure a directory number (DN) on a phone, the default partition for this number is a special partition labeled <none>. <none> exists on the system mostly as a placeholder, a catch-all for devices without a home, much in the same way Uncategorized is the default category for posts on my blog. Normally, <none> isn’t much of a bother. I ignore it almost entirely. However, in situations where I’m forced to deal with it, I start wanting to pull my hair out.

<none> interacts with the system in some pretty strange ways. By rule, when you configure a DN, it can call other DNs in the same partition (provided the calling search spaces match). As long as all your devices exist in the same partition everything is great. However, much like creating a large network with only one VLAN, creating a phone system with only one partition can lead to problems down the road. What if you want your voice mail system segregated from certain phones? One of my other favorites is the executive that only wants his phone to be dialable from certain extensions on the system. In order to accomplish these things (and more), you are going to need to create additional partitions. And the second you do, the <none> problem becomes a real hassle.

<none> is actually a null partition. It doesn’t really exist in the system, so it can’t be assigned to or removed from any calling search spaces (CSS). This means that <none> exists in EVERY CSS. If a phone or gateway is located in <none>, any partition on the system will be able to dial it. However, the phones located in <none> won’t be able to dial any other partitions. You could create a special CSS to allow it to dial other partitions, but you’ll never be able to make the phone non-dialable. No matter what, every search space created will be able to dial that phone because every CSS includes the <none> partition as an unlisted member, kind of like the understood “deny” statement at the end of an access list.
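To see why this matters, here is a toy Python model of the partition/CSS lookup. The key assumption, matching the behavior described above, is that the null <none> partition acts as an unlisted member of every CSS:

```python
# Toy model of CUCM partition/CSS behavior: a DN is dialable if its partition
# is in the caller's CSS -- and the null <none> partition behaves as an
# implicit member of every CSS, like the understood "deny" on an access list
# in reverse. Partition names below are invented examples.

NONE = "<none>"

def can_dial(caller_css: list[str], target_partition: str) -> bool:
    # <none> is effectively appended to every calling search space.
    return target_partition == NONE or target_partition in caller_css

lobby_css = ["InternalDN"]              # deliberately excludes the executive
print(can_dial(lobby_css, "ExecDN"))    # False -- partitions restrict this
print(can_dial(lobby_css, NONE))        # True -- a DN in <none> can't hide
```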

The best thing to do is create two different partitions for your internal devices. One, which I call “InternalDN”, is where all your phones’ directory numbers go. If you are creating partitions for multiple groups for a multi-tenant cluster, you could give them more specific names like “InternalDN-CoA” and so on. Then, you create CSS groups that only allow phones in those partitions to call each other, but no one else. Then, you put your devices that need general access to only the cluster, such as voice mail and gateways, in a partition named “ClusterOnly”. That way, you can remember to keep your DNs different from your VM ports, and you can restrict access to each as needed.

Tom’s Take

Don’t use <none>! I’ll come and slap you. Seriously, while it may be quick and easy to set up, if you keep using <none> for everything, it’s like building a house on quicksand. Sooner or later, you’re going to get sucked into a huge time sink, fixing a strange problem that requires you to unravel your entire configuration. Better to split your phones and cluster resources into separate partitions and build it right the first time. Just pretend you’ve never even heard of <none> and all will be well.


The final Wireless Tech Field Day presenter was Fluke Networks. Fluke recently purchased AirMagnet, so I was curious about what they had to offer that was different from the AirMagnet presentation. I’ve used some tools from Fluke in the past and found them to be very handy, but since they shifted to a more hardware-oriented approach I hadn’t really kept up with things.

The presentation kicked off with Carolyn Carter, Product Manager for the Portable Network Tools division. She gave us an overview of Fluke and some of the tools that they offer. Owing to the fact that this was a wireless-focused event, she delved right into the AirCheck, a handheld wireless scanner. This product is designed to be used by a first level technician that would be sent to a site to do some preliminary investigation in order to get enough information to see if a site visit would be warranted. It’s a rugged little device, the trademark blue and gold coloring making it stand out anywhere you might accidentally leave it. As we dove deeper into the AirCheck, Carolyn handed the presentation over to Paul Hittel, Principal Engineer. Paul seemed a little nervous as he started into his presentation, probably owing to the fact that he is more of an engineer than a speaker. He fumbled with his notes a bit in the beginning and told a product story that went a bit longer than it should have before it found the point. As an aside, I know exactly how Paul feels. I’m sure some of my co-workers are still waiting for my stories to get to the point.

Paul described how Fluke came up with the idea of a handheld scanner. Since wireless is such a tricky medium to work in and can be affected by any number of environmental factors, a site visit is often necessary to uncover additional details, such as a recently installed wireless video camera or a testy microwave that’s only on for 10 minutes a day. The wireless engineering staff is usually equipped to handle these kinds of spectrum challenges, whether they be armed with a Wi-Spy DBx or an AirMagnet Spectrum XT. However, these products are not usually deployed to the first level technical support staff, usually due to cost or unfamiliarity with their complexity. And since sending your top engineer on site to diagnose what could be a simple issue is an inefficient use of their time, Fluke decided to wrap the spectrum analyzer in an easy-to-understand handheld package.

The unit powers on in about 2-3 seconds and starts scanning the airwaves right away. It can detect access points (APs) and wireless networks. It displays the network type with easy-to-decipher symbols, denoting a/b/g/n and even 40MHz n networks. You can view which access points are broadcasting the networks, along with how many APs are detected for a given channel, which is critical in the 2.4GHz space. There is even a simple location application available that plays a tone from an integrated speaker that increases in pitch the closer you are to a specific AP. It’s not exactly a bullseye, but it’ll help you find an AP that may be hidden under a desk or in a ceiling.
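For illustration, the per-channel AP count that makes a crowded 2.4GHz channel jump out can be sketched in a few lines of Python. The scan results below are invented sample data, not AirCheck output:

```python
# Sketch of the per-channel AP tally an AirCheck-style scan produces.
# The scan_results data is invented for the example.
from collections import Counter

scan_results = [
    {"ssid": "corp", "channel": 1}, {"ssid": "guest", "channel": 1},
    {"ssid": "corp", "channel": 6}, {"ssid": "neighbor", "channel": 6},
    {"ssid": "printer", "channel": 6}, {"ssid": "corp", "channel": 11},
]

aps_per_channel = Counter(ap["channel"] for ap in scan_results)
for channel, count in sorted(aps_per_channel.items()):
    print(f"channel {channel}: {count} AP(s)")
# Channel 6 is the busiest here -- the kind of detail that tells the first
# level tech whether a follow-up visit from the wireless engineer is needed.
```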

Rather than just telling us about how great this unit was, the Fluke team brought us each one to demo and play with. We walked around the room playing with the different options. Several of the delegates said they would be perfect for first tier support personnel in remote offices. One even remarked that what he thought was a little tinker toy was in fact a great tool for the segment at which it was targeted. This kind of hands-on demo is great for tools such as this because the “try it before you buy it” mentality is paramount to me in a hardware-based unit. By giving us the opportunity to walk around and put it through its paces at our leisure, I think the delegates were endeared to the tool a bit more than if they had only watched screenshots on a slide.

Alas, all good things must come to an end, and Carolyn needed all of the units turned back in, since they were merely loaners. However, she did say that she had one that she could give away. We each filled out a card and dropped it into the hat, and when Paul drew the name, mine came up! Yes folks, I am now the possessor of an AirCheck. I plan on letting my other engineers and technicians evaluate it to its fullest, and if nothing else I hope it gives me the opportunity to sit at my desk and write a few more blog posts rather than needing to drive out on site to find a fussy microwave.

Tom’s Take

Fluke makes great tools, there’s no denying that. I have a full wiring kit and telephone lineman’s set in my bag. I can now add an AirCheck to that same lineup. The rugged nature of their products means I don’t have to worry about dropping it. The AirCheck impressed me by not attempting to cram a wireless engineer into a plastic box. Instead, it’s a focused tool designed to lay some groundwork and assist the tier 1 helpdesk in determining if they need to get someone else involved in an issue. While Fluke can never be said to have the cheapest toys in the toy box, I think that the amount you invest in them can give you an excellent return in time saved by avoiding unnecessary site visits for simple issues.

Disclaimer

Fluke Networks was a sponsor of Wireless Tech Field Day, and as such they were responsible for a portion of my travel expenses and hotel accommodations. In addition, I personally won an AirCheck evaluation unit from them in a raffle. At no time did they ask for nor were they assured any kind of consideration in the writing of this review. The thoughts and analysis herein are mine and mine alone. The thoughts are given freely and without reservation whatsoever.


The second presenter at Wireless Tech Field Day day 2 was AirMagnet. I’ve heard of them before in reference to their spectrum analysis products, and based on what I’d seen the day before from MetaGeek, I was interested to see how the AirMagnet product compared. I knew that the list price for the AirMagnet products was higher than that of MetaGeek, but I was sure that the differences between the two justified the price difference.

The presentation was kicked off by Bruce Hubbert, the Principal Systems Engineer for AirMagnet. He gave us a great overview of the AirMagnet product line. I never realized that AirMagnet had such a plethora of products dedicated to wireless scanning and design. These include AirMagnet Survey Pro, which is a very good tool used to design wireless networks quickly and easily. The tool looked quite detailed, with the ability to lay out your particular building or site maps and define what types of material it was constructed from, then tell the program to automatically lay out the access points based on frequency and coverage patterns. This would be a great tool for those that spend a great deal of time designing wireless networks for large sites. While it can’t replace a good old-fashioned site survey, it can give the wireless engineer a great starting point for placement patterns.

Another program that AirMagnet is known for is AirMagnet WiFi Analyzer Pro. This tool allows an engineer to walk around with a PC Card adapter and perform in-depth site surveys. The tool can generate packets and measure the data rates on APs. The idea is that the engineer mounts the AP in a particular location or has it attached to a mobile cart and then generates packets and measures what kind of radiation pattern and data rates result. This is probably one of the most important tools for a wireless engineer to have in their toolbox when performing a thorough site survey.

The tool that we got the most interaction with was AirMagnet Spectrum XT. This is a full-featured spectrum analyzer designed to detect sources of wireless interference and classify them to aid in troubleshooting wireless issues. It is quite similar to the MetaGeek Wi-Spy and Chanalyzer that we looked at the previous day, and the Spectrum XT software appears to have a similar feature set. What makes the difference in Spectrum XT is the integration that you get with the rest of the suite of AirMagnet tools above. You can use the spectrum analysis from XT to feed into the survey and design tools and give you a good picture of how to design your network to avoid interference sources such as microwaves, cordless phones, and unshielded audio systems. The delegates were provided with a copy of Spectrum XT along with a USB spectrum scanner to evaluate. One curiosity that I asked Bruce about was the fact that all the spectrum analyzers I had seen required Windows as the operating system. Given that most of the delegates were packing MacBooks, I found it curious that more development wasn’t done for OSX. Bruce explained that this was due to the need for deep interaction with the wireless network card drivers to perform packet captures and analysis. He did say that many were working toward finding ways to integrate with OSX that didn’t involve Boot Camp or virtual machines in Parallels or VMware Fusion.

The final part of the AirMagnet presentation focused on their wireless intrusion prevention system (WIPS) products. The AirMagnet solution is designed to integrate with an existing deployment of APs and deliver independent intrusion protection as well as spectrum analysis from a dedicated platform. As the threats to wireless networks grow and their critical nature becomes more and more integrated into areas such as healthcare, the need to have a WIPS solution is very real. By augmenting your existing infrastructure with the AirMagnet solution, you can increase the coverage of any existing setups as well as provide a different detection vector to avoid being blinded by targeted exploits designed to eliminate a specific vendor’s equipment. The security mantra of “defense in depth” applies equally to both wired and wireless networks. I didn’t get a chance to really test the AirMagnet solution in great detail, but I will be sure to keep it in mind in the future in the event that a dedicated WIPS solution is called for.

Tom’s Take

I think AirMagnet has earned their reputation by making some great tools that provide wireless engineers and architects with the ability to design and troubleshoot wireless networks at a very high level. Some might argue that the pricing of their solutions is on the high side, but the counter to that is the amount of detail that you get from their integration. The suite of AirMagnet tools isn’t for everyone, and may indeed be overkill for smaller deployments, but if you are beginning to design and deploy enterprise-grade networks, you can’t go wrong by looking at their products. The returns you gain from the expertise put in by years of research and development at AirMagnet will easily pay for the investment in short order.

AirMagnet was a sponsor for Wireless Tech Field Day, and as such they were responsible for paying a portion of my travel expenses and hotel accommodations. In addition, they provided the delegates a package including an AirMagnet polo shirt and a copy of Spectrum XT with USB adapter for evaluation. At no time did they ask for nor were they promised any kind of consideration in this review. The analysis and opinions here are mine and mine alone. They are given freely and without reservation.


Day two of Wireless Tech Field Day started off with HP giving us a presentation at their Executive Briefing Center in Cupertino, CA. As always, we arrived at the location and then immediately went to the Mighty Dirty Chai Machine to pay our respects. There were even a few new converts to the Dirty Chai goodness, and after we had all been properly caffeinated, we jumped into the HP briefing.

The first presenter was Rich Horsley, the Wireless Products and Solutions Manager for HP Networking. He spoke a bit about HP and their move into the current generation of controller-based 802.11n wireless networks through the acquisition of Colubris Networks back in 2008. They talked at length about some of the new technology they released that I talked about a couple of weeks ago over here. Rather than have a large slide deck, they instead whiteboarded a good portion of their technology discussion, fielding a number of questions from the assembled delegates about the capabilities of their solutions. Chris Rubyal, a Wireless Solutions Architect, helped fill in some of the more technical details.

HP has moved to a model where some of the functions previously handled exclusively by the controller have been moved back into the APs themselves. While not as “big boned” as a solution from Aerohive, this does give the HP access points the ability to segment traffic, such as the case where you want local user traffic to hop off at the AP level to reach a local server, but you want the guest network traffic to flow back to the controller to be sent to a guest access VLAN. HP has managed to do this by really increasing the processor power in the new APs. They also have increased antenna coverage on both the send and receive side for much better reception. However, HP was able to keep the power budget under 15.4 watts to allow for the use of 802.3af standard Power over Ethernet (PoE). I wonder if they might begin to enable features on the APs at a later date that might require the use of 802.3at PoE+ in order to fully utilize everything. Another curious fact was that if you want to enable layer 3 roaming on the HP controller, you need to purchase an additional license. Given the number of times I’ve been asked about the ability to roam across networks, I would think this would be an included feature across all models. I suppose the thinking is that the customer will mention their desire to have the feature up front, so the license can be included in the initial costs, or the customer will bring it up later and the license can be purchased for a small additional cost after the fact. Either way, this is an issue that probably needs revisiting down the road as HP begins to get deeper into the wireless market.

After some more discussion about vertical markets and positioning, it was time for a demo from Andres Chavez, a Wireless Solutions Tester. Andres spends most of his time in the lab, setting up APs and pushing traffic across them. He did the same for us, using an HP E-MSM460 and iPerf. The setup worked rather well at first, pushing 300 Mbit/s of data across the AP while playing a trailer for the Star Wars movie on Blu-ray at full screen in the background. However, as he increased the stream to 450 Mbit/s, Mr. Murphy reared his ugly head and the demo went less smoothly from that point on. There were a few chuckles in the audience, but you can’t fault HP for showing us in real time what their APs are capable of, especially when the presenter wasn’t used to being in front of a live video stream. One thing that did give me pause was that the 300 Mbit/s stream pushed the AP’s processor to 99% utilization. That worried me because we were only pushing traffic across one SSID with no real policies turned on at the AP level. I wonder what might happen if we enabled QoS and other software features when the AP is already taxed from a processor perspective, not to mention putting 4 clients on at the same time. When I questioned them about this, they said there were actually two processor cores in the AP, but one is disabled right now and will be enabled in future updates. Why disable one processor core instead of letting it kick in and offload some of the traffic? I guess that’s something we’ll have to see in the future.
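For those who haven’t run a test like this, here’s a rough sketch of how a throughput demo of this kind is typically set up with iPerf. The server address, intervals, and durations below are placeholders of my own, not HP’s actual test parameters:

```shell
# Hypothetical iPerf (v2-style syntax) throughput test through an AP.
# 192.168.1.10 stands in for a wired host behind the AP.

# On the wired host, start an iPerf server listening for UDP traffic,
# reporting results every second:
iperf -s -u -i 1

# On a wireless client associated to the AP, offer ~300 Mbit/s of UDP
# traffic for 60 seconds:
iperf -c 192.168.1.10 -u -b 300M -t 60 -i 1

# Ramping the offered load to 450 Mbit/s approximates the point where
# the demo started to struggle:
iperf -c 192.168.1.10 -u -b 450M -t 60 -i 1
```

UDP with a fixed bandwidth target (`-b`) is the usual choice here because it reports loss and jitter at a known offered load, rather than letting TCP back off when the AP saturates.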

After a break, the guys from HP sat down with the delegates for a round table discussion about challenges in wireless networking today and future directions. It was nice for once to have a real discussion with the vendors about these kinds of topics. Normally, we would only have a round table like this if a session ended early, but having it scheduled into our regular briefing time gave us a chance to explore some topics in greater depth than we could in a 5-10 minute window. Andrew vonNagy brought up an interesting point about the need for better management of user end-node devices. The idea that we could restrict what a user can access based on their client device is intriguing. I’d love to be able to set a policy that restricted my iPhone and iPad users to specific applications, such as the web or internal web apps, while ensuring my laptop clients had full access even with the same credentials.

Tom’s Take

HP is getting much better with their Field Day presentations. I felt this one was a lot better than the previous one, both in content and in the level of interaction. Live demos are always welcome, even if they don’t work 100%. Add the chance to sit down and brainstorm about the future of wireless, and you have a great morning. I think HP’s direction in the wireless space is going to be interesting to watch in the coming months. They seem to be pushing more and more functions out of the controller and back into the APs themselves, which will allow more decisions to be made at the edge of the network and keep traffic from needing to traverse all the way to the core. I think HP’s transition to a “fatter” AP at the edge will take some time, both from a technology deployment perspective and to ensure they don’t alienate current customers by reducing the effectiveness of already deployed equipment. I’ll be paying attention in the near future to see how these things proceed.

HP was a sponsor of Wireless Tech Field Day, and as such they were responsible for a portion of my travel expenses and hotel accommodations. In addition, they provided lunch for the delegates, as well as a pen and notepad set and a travel cooler with integrated speakers. At no time did they ask for, nor were they promised, any kind of consideration in the writing of this review. The analysis and opinions presented here are given freely and represent my own thoughts.