Thursday, December 29, 2011

The Active Directory UserMod Assistant is an awesome application that lets your end users change their own AD profile data, such as telephone number or address. It's a free solution built on HTML and VBScript, so all the core requirements should already be on your users' workstations. In addition, the program is not compiled, which makes it easy to edit. For example, we put extensions in the 'Business' telephone number field and don't want to validate it.

I'm deploying it via a shortcut to a shared folder (the shortcut is auto-created on the user's desktop at login if it doesn't exist), so users who need VPN access can add their mobile number(s) for use with PhoneFactor.

Note - There may be Security Issues when running the application from a share that is not in your Trusted Sites.

Wednesday, December 21, 2011

I've been building an IT department, and of course one of the basic requirements is metrics and monitoring. I played with AlienVault for a bit, but it's more of a vulnerability management/log consolidation tool. I stumbled upon Zenoss via Proxmox (it's an included VM appliance), and it seems to work quite well.

After some setup, I now get both WMI and SNMP notifications (like Nagios) from various devices. I also have SNMP traffic graphs (like Cacti) for the interfaces on those devices. Note - Exchange sucks, and when it crashes I get no notifications. Time to get the qmail SMTP relay in place for reliable email delivery.

What I was missing was NetFlow data. Not a big deal - throw an OpenVZ Debian VM onto Proxmox and set it up with nfdump and nfsen (steps modified from http://www.linuxscrew.com/2010/11/25/how-to-monitor-traffic-at-cisco-router-using-linux-netflow/ )

To continue, edit the file etc/nfsen.conf to specify where to install nfsen, the web server's username, its document root directory, etc. The file is well commented, so there shouldn't be any serious problems with it.

One of the major sections of nfsen.conf is 'Netflow sources'; it should contain exactly the same port number(s) you've configured the Cisco router with - recall the 'ip flow-export ...' line, where we specified port 23456. For example:
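The sample config didn't survive the repost, but a minimal %sources entry would look something like this (the source name and graph color here are placeholders; the port matches the 23456 from the router's 'ip flow-export' line):

```
%sources = (
    'cisco-rtr' => { 'port' => '23456', 'col' => '#0000ff', 'type' => 'netflow' },
);
```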

In case of success you'll see a corresponding notification, after which you will have to start the nfsen daemon to get the ball rolling:

/path/to/nfsen/bin/nfsen start

From this point nfdump starts collecting the NetFlow data exported by the Cisco router, and nfsen gets hard at work visualizing it: just open a web browser and go to http://linux_web_server/nfsen/nfsen.php to make sure. If you see empty graphs, just wait a while to let nfsen collect enough data to visualize.

Friday, December 16, 2011

So I ran some quick dd-based write numbers before choosing a filesystem for my new Proxmox-based virtual server. I really wanted ZFS, but I need OpenVZ and KVM more, and ZFS-Fuse just doesn't cut it.

In short - ext4 is the winner, though ZFS sure looks pretty damn fast. SolarisInternals has more info on ZFS write speed.
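For reference, the tests were simple sequential writes along these lines (the path, block size, and count here are illustrative, not my exact benchmark parameters):

```shell
# Write 256 MiB of zeros, syncing data to disk before dd exits so the
# reported throughput isn't just the page cache:
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync
```

The conv=fdatasync flag matters: without it, dd on a box with plenty of RAM mostly measures how fast you can dirty the page cache, not the filesystem.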

Another thing to note, though the numbers below for ZFS-Fuse don't reflect it, is that ZFS performs better with the write-back cache off on your RAID controller. Had I tested OpenSolaris with the write-back cache on (boot-up was ungodly slow with it on), I'm sure it would have been reflected in the numbers.

Tuesday, November 29, 2011

So I'm back in the thick of things as IT Manager of a small company, and having spent the past year specifically in a Security role, my first action is to push out Group Policy changes to make sure updates are installed and, more importantly, that Flash is up to date.

I've discovered that Windows 7 x64 machines on Win2k3 domains have major issues.

First, they won't join the domain in the normal manner. Then I found one that was previously set up (I have no idea how) that won't run gpupdate /force. When you do, all you get is:

The processing of Group Policy failed. Windows could not resolve the computer name. This could be caused by one or more of the following:
a) Name Resolution failure on the current domain controller.
b) Active Directory Replication Latency (an account created on another domain controller has not replicated to the current domain controller).
Computer Policy update has completed successfully.
To diagnose the failure, review the event log or invoke gpmc.msc to access information about Group Policy results.

Wednesday, July 27, 2011

This is mostly a reminder for myself: when installing ioncube from FreeBSD ports, an ioncube.ini loader file is created (or already exists) in /usr/local/etc/php.

If you have xcache installed, there is also a /usr/local/etc/php/xcache.ini

When firing up php (php -v / php -m), you will receive an error:

PHP Fatal error: [ionCube Loader] The Loader must appear as the first entry in the php.ini file in Unknown on line 0

The reason is the defaults in xcache.ini: xcache is loaded as a full module instead of a Zend extension. Change xcache.ini to load it as a Zend extension, and put the ioncube load lines ahead of the xcache ones, like so:
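The snippet didn't survive here, but the combined result looks something like this (the extension paths and PHP API version directory are examples from my setup; adjust them to match your PHP build):

```
; ioncube must be the first zend_extension loaded
zend_extension = /usr/local/lib/php/20090626/ioncube_loader_fre_5.3.so
; xcache loaded as a Zend extension instead of extension=xcache.so
zend_extension = /usr/local/lib/php/20090626/xcache.so
```

After the change, php -v should show the ionCube Loader and XCache banners with no fatal error.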

Wednesday, January 26, 2011

I can't let this die... It's a solution with such high potential. It seems that with the Oracle purchase of Sun, the Sun links are all dead, so I'm reposting the original page from the Wayback Machine for reference.

A question was recently posted to zfs-discuss@opensolaris.org on the subject of AVS replication vs. ZFS send/receive for odd-sized volume pairs, and whether the use of AVS makes it all seamless. Yes, the use of Availability Suite makes it all seamless, but only after AVS is initially configured.

Unlike ZFS, which was designed and developed to be very easy to configure, Availability Suite requires explicit and somewhat overly detailed configuration information to be set up, and set up correctly, for it to work seamlessly.

Recently I worked with one of Sun's customers on the configuration of two Sun Fire x4500 servers, a remarkably well-performing system: a four-way x64 server with the highest storage density available, 24TB in 4U of rack space. The customer's desired configuration was simple: two servers in an active-active, high-availability configuration, deployed 2000 km apart, with each system acting as the disaster recovery system for the other. Replication needed to be CDP (Continuous Data Protection), running 24/7, 365 days a year, in both directions, and once set up correctly, CDP would work seamlessly as a lights-out operation.

Each x4500, or Thumper, comes with 48 disks, two of which will be used as the SVM-mirrored system disk (can't have a single point of failure), leaving 46 data disks. Since each system will also be the disaster recovery system for the other site, this leaves 23 disks available on each system as data disks. The decision as to what type of ZFS-provided redundancy to use, the number of volumes in each pool, and whether compression or encryption is enabled is not a concern to Availability Suite, since whatever vdevs are configured, the ZFS volume and file metadata will be replicated too.

For testing out this replicated ZFS-on-AVS scenario on my Thumper, here are the steps followed:

1). Take one of the 46 disks that will eventually be placed in the ZFS storage pool. Use the ZFS zpool utility to correctly format this disk, an action which will create an EFI-labeled disk with all available blocks in slice 0. Then delete the pool.

# zpool create -f temp c4t2d0; zpool destroy temp

2). Next, run the AVS 'dsbitmap' utility to determine the size of an SNDR bitmap needed to replicate this disk's slice 0, saving the results for later use.
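Something along these lines, using the same example disk as step 1 (the device name is from my test box; substitute your own):

```
# dsbitmap -r /dev/rdsk/c4t2d0s0
```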

5). Use the 'find' utility below, adjusting its first parameter to produce the list of volumes that will be placed into the ZFS storage pool. Carefully examine this list, and adjust the first search parameter and/or use 'egrep -v "disk|disk"' for one or more disks, to exclude from this list any volumes that are not to be part of this ZFS storage pool configuration.

The resulting list produced by "find ..." is key to reformatting all of the LUNs that will be part of a replicated ZFS storage pool.

9). Now mirror metadevices d101 and d102 into mirror d100, ignoring the WARNING that both sides of the mirror will not be the same. When the bitmap volumes are created, they will be initialized, at which time both sides of the mirror will be equal.

# metainit d100 -m d101 d102

10). Now, from the mirrored SVM storage pool, allocate bitmap volumes out of SVM soft partitions for each SNDR replica.
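For example, carving one bitmap volume per replica out of mirror d100 as a soft partition (the metadevice name and size here are illustrative; use the bitmap size reported by dsbitmap in step 2):

```
# metainit d201 -p d100 1m
```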

About Me

IT Guru. Been doing this stuff for 20 years - everything from desktop support to server builds to network/Internet anything and PCI Compliance/InfoSec.
Built and operated VFEmail.net from the ground up in 2001 until today.
http://www.linkedin.com/in/rickromero