Monday, February 20, 2012

In a converged world - EMC having purchased SMARTS and VMware, and bundled various vendors under a single Ionix umbrella - functionality is slowly being hidden and removed, making managed services and enterprise management more difficult from a standards perspective. The ESM / EISM (Server Monitoring) product is the latest to be dumbed down by EMC.

The History:

With ESXi being a product of VMware, and VMware being owned by EMC, the combined company offers a different management solution called VirtualCenter, which is highly proprietary. VirtualCenter is not a Managed Services-grade product able to run on multiple operating system platforms. The EISM or ESM product has traditionally been cross-platform, enabling Managed Service providers to manage servers, hypervisors, and application processes all from a highly scalable central platform.

The Problem:

EMC is starting the process of crippling the managed services products in its portfolio, so that enterprise products can be emphasized through its VMware subsidiary, and additional tools (which were formerly not required for monitoring) become a required purchase.

Friday, February 17, 2012

The standard management protocol for managing systems is the Simple Network Management Protocol (SNMP). Enterprise and Managed Services vendors must support SNMP to be considered a player in the data center. VMware ESXi offers SNMP capabilities, but tools such as EMC Ionix ESM require a field certification in order to manage even these basic capabilities.
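As a quick sanity check, one can poll an ESXi host's SNMP agent directly before pointing a management tool at it. A minimal sketch, assuming Net-SNMP's snmpget is installed on the management station; the host name "esx01", community "public", and the helper name are placeholders:

```shell
# check_esxi_snmp: poll sysDescr.0 on a host to confirm its SNMP agent
# answers. Host "esx01" and community "public" are example values.
check_esxi_snmp() {
    host="${1:-esx01}"
    community="${2:-public}"
    # On ESXi, sysDescr.0 should return the VMware ESXi version string
    snmpget -v2c -c "$community" "$host" SNMPv2-MIB::sysDescr.0
}
```

(On the ESXi side, the agent itself is configured with the vSphere CLI's vicfg-snmp command - setting a community and enabling the agent with -E.)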

Thursday, February 16, 2012

Ever try to shut down EMC Ionix (formerly Voyence) NCM (Network Configuration Manager) related TCP port services by disabling /etc/init.d scripts, only to find that there are still sockets being listened on?

The Problem

It was noted, on an NCM or Voyence platform, that a port was still being listened on even after the related service scripts had been disabled.
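To identify which process still owns such a socket, a helper along these lines can be used (a sketch for Linux-style hosts; the function name is illustrative, and it assumes lsof or a netstat supporting -p is available):

```shell
# find_port_owner: print the process, if any, still listening on a TCP port.
find_port_owner() {
    port="$1"
    if command -v lsof >/dev/null 2>&1; then
        # -sTCP:LISTEN restricts output to listening sockets only
        lsof -nP -iTCP:"$port" -sTCP:LISTEN
    else
        netstat -tlnp 2>/dev/null | grep ":$port .*LISTEN"
    fi
}
```

Empty output means nothing is bound to the port; otherwise the PID shown can be traced back to whichever NCM daemon was respawned outside the /etc/init.d scripts.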

Tuesday, February 14, 2012

Abstract: When working in a clustered environment, it is often desirable to securely move data between platforms, or even forward individual application displays securely. The SSH protocol allows for such movement, but automatic login is a requirement for automation and scripting. This can be accomplished via pre-exchanged keys.

SSH Forwarding: To set up SSH application TCP port forwarding, view the "Solaris 10: SSH and Forwarding HTTP" document.

SSH Auto-Login: Several steps need to be followed to create the local public key and transfer it to the remote host:

Decide which remote host will receive the "ssh" connections:
sun9999/user$ Host="sun1234"

Test the connection to the remote host; no password prompting should occur:
sun9999/user$ ssh ${Host} 'uname -n'
sun1234
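The key creation and transfer that these steps rely on can be sketched as follows (standard OpenSSH commands; the function name is illustrative, and the public key is appended by hand since Solaris 10 does not ship ssh-copy-id):

```shell
# setup_ssh_autologin: create a passphrase-less key pair (if absent) and
# append the public key to the remote user's authorized_keys.
setup_ssh_autologin() {
    host="$1"
    [ -f "$HOME/.ssh/id_rsa.pub" ] || \
        ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa"
    # Copy the public key by hand (no ssh-copy-id on Solaris 10):
    cat "$HOME/.ssh/id_rsa.pub" | ssh "$host" \
        'mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys'
}
```

After running setup_ssh_autologin "${Host}" (entering the password one last time), the connection test above should complete without prompting.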

SSH Auto-Login Debugging: If password prompting is still occurring after the previous steps, one can add the "-v" option to "ssh" during the connection test above, in order to provide additional debugging verbosity.

A common error might be:

Failed to acquire GSS-API credentials for any mechanisms

If the keys are properly created and a password is still prompted for, ensure the remote host has "700" permissions on the ".ssh" directory and "755" permissions on the $HOME directory.

Password prompting for root: By default, "ssh" will not allow login as the "root" user. Of course, this creates a problem when trying to forward ports which are below 1024 (i.e. http port tcp/80). To correct:
$Host/root# vi /etc/ssh/sshd_config
PermitRootLogin yes
$Host/root# svcadm restart ssh

Thoughts on Security: Simple connectivity in a cluster can be done with the "r" tools ("rsh", "rcp", "rlogin"), but passwords are passed in the clear when a user types them at a prompt. Most critics advocate SSH as the more secure solution for clustering.

The "r" tools can also be set up for auto-login in a clustered environment. If the data being passed is of little consequence, this can be a reasonable alternative to the heavier "ssh" protocol, which burns CPU cycles on mandated end-to-end encryption.
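For reference, r-tool auto-login is driven by a .rhosts file in the remote user's home directory. A minimal sketch (the helper name is illustrative; the host and user names follow the earlier SSH example):

```shell
# setup_rhosts: trust r-tool logins from client host $2 as user $3,
# writing the entry into the .rhosts under home directory $1.
setup_rhosts() {
    home="$1"; client="$2"; user="$3"
    echo "$client $user" >> "$home/.rhosts"
    # r-tools refuse a group- or world-writable .rhosts
    chmod 600 "$home/.rhosts"
}
```

After running this on the remote host (e.g. setup_rhosts "$HOME" sun9999 user), a command such as rsh sun1234 'uname -n' should return without a password prompt.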

Thoughts on Today's Date: This article was published on Saint Valentine's Day - Happy Saint Valentine's Day to you!

Monday, February 13, 2012

Vonage and MSN Port Usage

Abstract:

Adding Voice over IP (VoIP) and Instant Messaging to a home is normally a simple process. The goal is often to increase communication while reducing telecommunications bills. Occasionally, there are problems with access which require troubleshooting, or more advanced features are desired. A user may need to understand the protocols in order to better maintain security and limit exposure to attacks by viruses and worms.

Vonage Voice Adapters

Vonage is a low-cost VoIP phone provider service. Normally, not much needs to be done, except plug in a device. Here are the protocols which are required.

Service   TCP    UDP            Notes
DNS       -      53             Name resolution
TFTP      -      21, 69, 2400   Firmware upgrade
HTTP      80     -              Configuration
SIP       -      5061           Pre-2005 Vonage devices
RTP       -      10000-20000    RTP (voice) traffic

When a call is made, a random port between 10000 and 20000 is used for RTP (voice) traffic. If any of these ports are blocked, you may experience one-way audio or no audio at all.
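When triaging one-way-audio complaints from firewall logs, a tiny helper to test whether a port falls inside the Vonage RTP range can be handy (the function name is illustrative):

```shell
# in_rtp_range: succeed (exit 0) if the given UDP port is inside the
# 10000-20000 range Vonage uses for RTP voice traffic.
in_rtp_range() {
    [ "$1" -ge 10000 ] && [ "$1" -le 20000 ]
}
```

For example: in_rtp_range 15432 && echo "voice traffic - do not block"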

Microsoft MSN and Windows Messenger

Microsoft provides various tools like MSN and Windows Messenger, but in order to get full functionality, users must occasionally forward ports through firewalls, expanding exposure to worms and viruses. Use these very carefully.

Service                          TCP         UDP                     Notes
Windows Messenger - voice        -           2001-2120, 6801, 6901   Computer to phone
MSN Messenger - file transfers   6891-6900   -                       Allows up to 10 simultaneous transfers
MSN Messenger - voice            6901        6901                    Voice communications, computer to computer
MSN Messenger - text             1863        -                       Instant text messages

These ports may be helpful when you want to limit vulnerabilities within your environment to unfriendly viruses and worms.
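As one way to act on the table above, the messenger ports can be opened explicitly while everything else stays blocked. A sketch in Linux iptables syntax (an assumption - adapt the rules to whatever firewall actually fronts the network; the function name is illustrative):

```shell
# allow_messenger_ports: permit only the MSN/Windows Messenger ports
# listed above on the OUTPUT chain (requires root to actually apply).
allow_messenger_ports() {
    iptables -A OUTPUT -p tcp --dport 1863      -j ACCEPT  # instant text
    iptables -A OUTPUT -p tcp --dport 6891:6900 -j ACCEPT  # file transfers
    iptables -A OUTPUT -p tcp --dport 6901      -j ACCEPT  # voice (TCP)
    iptables -A OUTPUT -p udp --dport 6901      -j ACCEPT  # voice (UDP)
}
```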

Monday, February 6, 2012

The SPARC road map has been experiencing updates at a tremendously accelerated pace over the past few years, with new SPARC releases either arriving early, delivering higher performance, or both. It is quite exciting to see SPARC back in the processor game again!

Solaris 11 Launch: SPARC Road Map

During the Solaris 11 Launch in November 2011, the following was the SPARC road map, reminding the market of the 8-core T4 processor delivery, with the same performance as the former 16-core processor and enhanced single-threaded performance.

It was also hinted that the SPARC T5 was ahead of schedule at Oracle - shipping in 2012.

Now it is February 2012, and the SPARC road map has officially been adjusted (although exactly when it happened is unknown, since there was no official announcement).

SPARC Road Map Analysis

Note the accelerated changes in the SPARC road map over the past few months:

The 8 socket T5? processor will perform well enough to replace the M series 8 socket platform in 2012 and be competitive to reach up to the 16 socket M series.

The next 8 socket T5+? processor will perform well enough to replace the M series 8 socket platform in 2013 and be competitive to reach up to the 16 socket M series.

Unified SPARC - T4 Release

It should be noted that Oracle released a T/M unified processor socket called the "SPARC T4" in Q4 2011. It performed as well as (or better than, depending on the metrics) the "SPARC T3" released in 2010 (with 128 threads per socket), but the T4 halved the cores, doubled (or better) the speed of a T3 thread (with 64 threads per socket), and added a new option where thread speed could be 6x faster (with 8 threads per socket).

Extrapolations and Remembrance

The M-Series was out of range for many smaller service providers, while the lower-end T series offered the price-performance to be competitive only with mid-range systems, where platform throughput mattered. The recently released T4 offered more competitive single-threaded speed, to eat away at lower-end open-systems market share. The next-generation T-Series, expected later this year, will eat into the market share of more expensive, higher-end open systems with lower-cost, higher socket counts.

Oracle has already hinted that the T5 will have some of the features of the former RK or Rock processor (memory versioning looks like a relational memory interface). The addition of hardware compression, columnar database acceleration, Oracle number acceleration, and low-latency clustering (at the socket level) will make it an outstanding accelerator for the Oracle RDBMS and Oracle MySQL databases - placing SPARC years ahead of POWER and proprietary x86. The competitive benefit to Network Management systems with large embedded databases (i.e. performance management) will be immense.

This is not the first time that adding accelerators gave SPARC a massive boost - the addition of crypto cores inside the T processors made it the fastest single-socket HTTPS server on the market for years, and the highest-performing contender for scalable encrypted polling engines (for managed service provider class network management vendors). Non-competitive network management service providers avoided the encryption discussion around SSH and SNMPv3 because they could not "keep up", while competitive software providers out-shined their competition on SPARC. With the recent release of Intel's crypto instructions, that benefit is waning for brand-new network management service providers. The compression algorithms, in conjunction with database accelerators, will have come "just in time".

Clearly, the investment in the S3 core provided Oracle with the breathing space it needed to unify the M and T series, starting with the lower-end SPARC T4 platforms. With the soon-to-be-released SPARC T5 platforms, Oracle will continue to consume the low-hanging fruit in the M series (in addition to the AIX and HP-UX) space with a high-performing SPARC core which scales to greater socket counts.

Final Thoughts

It appears pretty clear that "NetMgt.BlogSpot.COM" was the first to break the road map update news. Continue to manage your networks with obsession and security!

Friday, February 3, 2012

Abstract: File systems have existed nearly as long as computing systems. First, systems used storage based upon tape solutions with serial access. Next came random block file access. Various filesystems were created, offering different capabilities, eventually allowing a disk drive to be divided into multiple logical slices. Volume managers arrived later on the scene to aggregate disks below individual filesystems, to make larger capacities possible. ZFS was created by Sun Microsystems for the purpose of erasing the distinction between volume manager and file system - to add flexibility that the divided pair could not easily achieve. Apple computers often have the need for massive data storage, but the native filesystem has been lacking - until ZFS became a possibility.

History: Apple computers are the traditional workhorse for graphic design houses. They work with large media such as billboards and books with high-resolution photographs... which all take a lot of space. As computers continued to advance, such users knew they needed a real filesystem.

In 2007, Apple was originally intending to package ZFS into its MacOSX operating system and ship it with Leopard. This would have fixed a lot of problems experienced in the Macintosh environment, including the long time it takes to re-silver a mirrored set when someone kicks a power cable on a desktop USB drive, and would have allowed virtually unlimited expansion of a filesystem by merely adding disks.

Then 2009 came along, and Apple dumped ZFS. There was an outcry in the community, which was looking for a real filesystem under MacOSX, but Apple instead started looking for a new team to "roll their own" filesystem.

By 2011, Apple still had not developed a modern filesystem, and some of the people who had been porting ZFS to MacOSX decided to form their own startup - with the purpose of finishing the port of ZFS to MacOSX.

With the arrival of ZFS, Apple MacOSX has finally entered the realm of being a very viable platform for server applications. No longer will people need to use MacOSX as a client and buy a SPARC or Intel Solaris platform as a server to gain the benefits of ZFS. Designers, video publishers, and media collectors can now just add the occasional multi-terabyte hard drive and keep on building their data collections with limited concern for failure - it will all be protected with parity, and old deletions can be easily rolled back.

With the addition of ZFS to MacOSX, expect to see more MacOSX platforms in small enterprises. The benefits of Solaris with the simplicity of MacOSX will surely be an awesome win for the computing community - which means Network Managers will need to take this into consideration as they roll out management platforms.