A couple of days ago I learned that when a FortiGate nears 70% memory usage, it starts shutting down services. The first thing it kills is the SSL portal, and the second is the SSL VPN. We've been having issues where people get bumped off the VPN and can't reconnect for 15 minutes to an hour. It happened rarely and the logs showed nothing, so it kept getting bumped down the troubleshooting line.

We had it bad this past Monday and no one could connect. It cleared up after 45 minutes or so and we did the usual: supplied logs and config to Fortinet. The issue came back Tuesday morning, so I called in to escalate. I got a nice dude and we set up a WebEx session. He headed straight for memory usage. Ah-ha! The usage is at 67%, says he. He hit the console, listed the processes, and Ifcron was using 30% memory and nil CPU. He killed it and bingo, all the remote users logged back in through the VPN and the portal came back up.

He admitted there are a couple of known memory leaks that were fixed in v4 MR3 patch 11 (we are on patch 7). We updated the 310B in dev and it's been happy. We'll do prod next week. One thing to mention, though: the SSL VPN config changes a bit, and we had to recreate it when we updated in dev.
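For anyone chasing the same symptoms, the checks the support guy ran map onto standard FortiOS CLI commands. This is a sketch from memory; verify the exact syntax against your FortiOS version, and `<pid>` is a placeholder for whatever process ID you find hogging memory:

```
# Overall CPU/memory usage -- high memory is conserve-mode territory
get system performance status

# Live process list, showing each daemon's CPU and memory share
diagnose sys top

# Kill a runaway process by PID (signal 11 forces the daemon to restart)
diagnose sys kill 11 <pid>
```

Killing the leaking daemon is only a stopgap, as in the story above; the patch is the actual fix.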

That following a vendor's recommendation to set IOPS to 0 with a VMware Round Robin config can give you half the performance you had when it was set to 1000... so much for a performance boost.

Depends on the workload, the number of paths, very heavily on the array type, etc.

The default settings are abysmal for anything other than light or sequential IO. In one of my installs, the throughput of small, random IO under high load going to the same volume is about 7x better when each request goes down a separate path. If you have, say, one VM doing small random IO with a queue depth of only 1, yes, it will be a detriment, but in every other scenario it's at worst a minor loss and at best a major gain. Again, depending on the array and the health of your storage network, YMMV.

Every array has different queue designs; 3Par, for instance, says to set it to 100 because their testing has shown that's the best balance of performance and resource utilization on the controllers. Now, if I was seeing half the performance I'd be a bit weirded out, because that's a huge decrease. I know that prior to 4.1 U1 there was a bug that would reset any custom value to something like 1 million after a reboot.
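For reference, on ESXi 5.x the Round Robin IOPS limit is per device and is set with esxcli; the `naa` device ID below is a placeholder for your own LUN identifier:

```
# Show the current Round Robin settings for a LUN
esxcli storage nmp psp roundrobin deviceconfig get --device=naa.xxxxxxxxxxxxxxxx

# Set the IOPS limit (e.g. 3Par's recommended 100) instead of the default 1000
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxxxxxxxxxxxxxx --type=iops --iops=100
```

The setting is per device, which is why the reset-after-reboot bug mentioned above was so painful on hosts with many LUNs.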

Works for me. At previous companies, we rebooted servers during the regular maintenance window even if we didn't need to, so people got used to the maint window. At the company I work at now we have a strictly defined window documented in umpteen places but we can't use it because we never have and it might freak everyone out. Ugh. This is why we have routers with 7 years uptime...

But at least it just uses ActiveSync now; no more CDO/MAPI issues with the Exchange servers.

True enough, still, they could have made it nicer surely?

I think they're too busy trying not to die as a business

Seriously, I expect the next version to be spiffier, assuming their market share stabilizes.

So I got BES10 working. As a note, once you build the server and have it running, it needs to be up for around 12-24 hours before it can activate devices. I hooked up a Z10 to it with BlackBerry Balance set up.

I know this might be blasphemy, but I like the device. It's actually a really good device to use, so far. And the BB Balance stuff is pretty cool.

No co-existence work needed. Since they've gone to ActiveSync, you just stand up the server, configure the SRP info, etc., and you can start using it. I didn't notice any delay in being able to activate a new device. The UDS server for Android and iOS is a little more complicated because you need an Apple certificate, but it's fairly straightforward.

I haven't set up the unified management, which is what I presume sryan2k1 is asking about. But you do need two servers if you want to use BDS and UDS, unless you wanna fuck around changing ports (see this article here).

Once again, I have proved to myself that a 4AM window for upgrading stuff is a bad idea. I scheduled upgrades on a customer's server in conjunction with their developer. The dev was to do 'stuff' in preparation from about 4AM to 5AM; I was to come in at 5AM, do the last bit, and flip a switch on some network gear. I show up online at 5AM and there is no indication that the dev has done any of his 'stuff'. Wait wait wait.

5:20AM rolls around and I start to panic a bit. I start looking for phone numbers to call and pull up the Google Docs spreadsheet we collaborated on the day before to build the schedule of tasks. He has been trying to chat with me through the Google Docs interface for the last 1.5 hours. He understood he was to wait for me before doing anything; I thought I was just showing up at the end to do my stuff after him.

I had copied the Google Doc into my own tools so I could put all my own notes and reminders on it without screwing up his stuff, so I had the Google Doc closed until I went back to double-check it. The entire maintenance window passed with him sitting there fuming that I had 'not showed up', and me trolling around the server over and over, waiting for him to move around the data he needed to fiddle with before I did my stuff.

Gah. Of course, he didn't think to just update the ticket, which he knows would have hit my phone, and I didn't think to just call him. Early-morning brain dysfunction. Luckily it's not critical, just a capacity upgrade, and it's already about four months behind schedule because of development stuff, so... rescheduled for next week.

You can look at doing some sort of MOP or other online tracking, so you can lay out discrete tasks, or at least milestones and handoffs, and there's less doubt about who did what. Of course, a conference call would also handle the in-the-moment issue.

There isn't any IM at your company?

I didn't catch this either: the other guy is a developer at the customer's company, not a co-worker of Paladin.

Next time use a conference call. Even if he sits around for 60 minutes until you show up, at least you're going to show up. I even put up conf calls when I'm the only one doing work; if something goes south, mgmt knows how to get hold of me, and I don't have to shuffle lines when more than one person starts prodding.

...when searching for ESXi v5.1 vSwitch setups, I ran across a blog showing detailed and interesting designs, but written in accented English. At first I thought, huh, someone copied a blog. Then I hit the about-me page.

VCDX #77

That would explain the detail

Great stuff. Lots of great scenarios and info layout. It's exactly WTF I've been looking for.

And then there's the little tidbit about ESXi v5.1 allowing you to restore dvSwitches to a new DB in a DR scenario. I did not know that. Another reason to push to upgrade ASAP.

EDIT 4: Yes, reading while (kinda) watching Chinese action movies or reading other sites seems to be the thing to do. Obsessive study for the rest of my life is probably the name of the game if I'm going to remain in IT.

EDIT 5: Seems the thing to do (for me) is to take Paul Kelly (vrif.blogspot.com)'s 10GbE VSS design and tweak it to add multipath I/O for iSCSI storage.

But do I then need to add a second management network IP or just forget about that and ignore the warning? I think I can forget about it in his design.

After I upgrade to 5.1, I'll flip to his DVS design and use LBT and NIOC (my storage doesn't support SIOC).
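For the multipath iSCSI piece, the ESXi 5.x approach is to bind one vmkernel port per uplink to the software iSCSI adapter; the adapter and vmk names below are placeholders for whatever your host actually shows:

```
# Bind two vmkernel ports (each pinned to a single active uplink) to the software iSCSI HBA
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Verify the bindings
esxcli iscsi networkportal list --adapter=vmhba33
```

Each bound vmkernel port becomes a separate path, which is what makes the Round Robin tuning discussed earlier matter.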

TIL that apparently the head of the office I work at thinks it's a great idea to print 1000 invoices on some random employee's inkjet printer rather than on the copier or color laser, because it "saves time and therefore money" for him to print everything at his desk instead of walking 3 feet to the copy room.

I work for a law firm; at any given point I could trip and hit my head on four printers on the way down. If one of those is offline/broken, people freak the fuck out. Walking (in some cases literally) < 5 feet to another printer isn't acceptable.

I hate that crap. It makes me miss my time at Cisco: two to four HP 8100s per floor and one color LaserJet per floor (plus a dye-sub and a color wax Tektronix for the marketing guys to do pre-press work), and that was it. At my current place we have oodles of very high-speed MFPs, and if it's more than ~30' to a copy room there's usually a medium-speed black-and-white printer, but we still have dozens of desktop printers. The cost per page over the life of those small printers has to be at least an order of magnitude higher than either of the other solutions; I can't understand why management lets the idiocy persist.

I don't have a problem with people wanting to save steps; I get that they're busy and need to save time. What kills me is that they didn't take the cost per page into account. They think that not spending money up front on a laser is going to save them money. Given the amount of printing they want to do, I highly, highly doubt it.

People get very possessive about "their" equipment. Plus workplace cultural inertia and outright resistance to change.

We employ Océ (Canon Business Process Services now) in all of our facilities. We (HQ) have a gigantic copy center on the 24th floor that is staffed all the time, along with them coming around and doing our interoffice mailings.

I assure you, having to walk 4 feet to get to a printer is too much for these people. No joke, at one point someone said "I refuse to share with XXX, I need my own"

Cisco famously rationalized their whole printing architecture from a clean sheet. Everyone should; I've had great success standardizing on IPP, even using downloadable Microsoft drivers for support back to Windows 98. OS X, *BSD and Linux support IPP natively through CUPS, of course.
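As a rough sketch of what standardizing on IPP looks like on the CUPS side (the server and queue names here are made up for illustration):

```
# Create a queue pointing at an IPP share and enable it immediately
lpadmin -p floor3-mfp -E -v ipp://printserver.example.com/printers/floor3-mfp

# Make it the user's default destination
lpoptions -d floor3-mfp
```

One protocol and one queue per shared device, regardless of client OS, is most of what makes the consolidated model manageable.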

CEPS was the bomb. We lost our local print server once, and the only thing that happened was that jobs took a bit longer because they had to spool to the server at the larger office we were associated with (over a DS-3 connection). That server took over the personality of ours, queues and everything, and the local traffic manager did the IP redirection. It was, bar none, the slickest print system I've ever seen. Of course, it was born of necessity: there were two global print admins for a company of 45k employees! Oh, and printing was mission-critical, because if labels didn't print with MAC addresses and serial numbers, stuff couldn't ship.

Last place I worked at was the same way with printers: we had four big MFPs and one mid-sized one for ~80 on-site staff, three of them within a 15m radius of each other. If one went down or ran out of toner, holy hell was raised by people who had to walk an extra 5 meters. Accounting had four printers just amongst themselves, one being one of the MFPs. Rather than taking the 5 minutes of training to learn how to print-to-hold and then enter a password at the printer to release their jobs, they insisted on the small desktop lasers "for confidentiality". Only one was actually needed, for the specialized cheque toner.

Next time use a conference call. Even if he sits around for 60 minutes until you show up, at least you're going to show up. I even put up conf calls when I'm the only one doing work, if something goes south mgmt knows how to get a hold of me and I don't have to shuffle lines when more than one person starts prodding.

Yup, that's the plan. I have done a lot of work with this customer, so basically we got lazy. We usually do a conference call, sort of just say hi, do the stuff, and have a bit of back and forth at the end; this time we tempted fate because we thought we had things mapped out so well. Of course we were rewarded for our hubris. We already have the conference set up for next time.

CEPS was the bomb, we lost our local print server once and the only thing that happened was that jobs took a bit longer because they had to spool to the server at the larger office we were associated with (DS-3 connection). That server took over the personality of our server including queues and everything and the local traffic manager did the IP redirection. It was bar none the slickest print system I've ever seen. Of course it was born out of necessity, there were two global print admins for a company of 45k employees! Oh, and printing was mission critical because if labels didn't print with MAC addresses and serial numbers stuff couldn't ship.

OMG, I need to research this. Printing in our hospitals is out of control. Automatic job redirection? Nirvana! Since my VP owns the desktop as well as the servers, we might be able to make this happen.