RDz includes a visible subset of CICS Explorer functionality. I say a visible subset because it actually includes ALL of CICS Explorer (and the 2 work great together). Because RDz is geared towards developers, a decision was made to expose (by default) the subset of CICS Explorer that would be of greatest interest to developers. If, however, you are doing POC work, working on a test system, or have some other scenario where you need the power of RDz AND the full power of CICS Explorer, this is simple to accomplish.

By default, your CICS/SM perspective will NOT include a menu entry for Definitions. Showing this entry is quite simple.

As a frequent "power user" of the WebSphere Liberty profile (wlp), I do much of my research using wlp and am a huge fan of it. While doing some density research on Linux on Z (a great platform for wlp, greater still if you are accessing data on Linux on Z or z/OS), I found an anomaly. While it's great for a new release to outperform prior releases (and wlp is quite good at doing this), the differences I found were frankly too good to be true (and as my mother taught me ... if everything is coming your way ... you may be in the wrong lane). In my testing, wlp 8.5.5 was delivering transactions per second (TPS) about 2.5x higher than 8.5.0.1 while burning only about 75% more CPU. I installed the 8.5.5 version on the same Linux on Z guest, ran the same application, and accessed the same back-end database on DB2 for z/OS. The difference was still there. A little research (javacores taken at various times during some performance runs) highlighted the problem. On the same 4-core guest, wlp 8.5.5 was running about 25 worker threads (since the workload was nontrivial JPA against DB2, most of these threads were waiting for a response from DB2). In contrast, wlp 8.5.0.1 had only 8 worker threads running. When I forced wlp 8.5.0.1 to have 24+ worker threads, the story was much more realistic. wlp 8.5.5 still shows a marked improvement over 8.5.0.1 (12%), but that is an improvement I can attribute to optimizations made between the releases. So if you are trying to push a great deal of workload through wlp 8.5.0.x (especially if some requests involve external processing such as service calls, DB2 calls, etc.) ... it is good to assess the utilization of threads in your wlp JVM. If you see few or no threads parked (waiting for work), then you may want to add a line similar to what I did to the server.xml:

<executor name="Default Executor" maxThreads="48" coreThreads="24"/>

This one change sent my transactions per second (TPS) from 2225 to 5037. Not only was TPS greatly enhanced, but the CPU consumed per transaction was reduced. So if you are using wlp 8.5.0.x and need to achieve higher utilization and/or higher TPS ... a single line added to server.xml (and any addition to server.xml should be done with care) can make all the difference.
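If you want to check the thread picture yourself before touching server.xml, a javacore is the place to look. Here is a minimal sketch of counting the Default Executor threads and how many are parked; the file name and sample lines below are made up for illustration (point the greps at a real javacore), though the `state:R`/`state:P` codes are the usual J9 javacore markings for runnable vs. parked threads.

```shell
# Sketch: count Liberty "Default Executor" worker threads in a J9 javacore,
# split by state (R = runnable, P = parked, i.e. waiting for work).
# The sample file below is fabricated so the sketch is self-contained.
cat > /tmp/javacore.sample.txt <<'EOF'
3XMTHREADINFO      "Default Executor-thread-1" J9VMThread:0x01, state:R, prio=5
3XMTHREADINFO      "Default Executor-thread-2" J9VMThread:0x02, state:P, prio=5
3XMTHREADINFO      "Default Executor-thread-3" J9VMThread:0x03, state:P, prio=5
EOF

total=$(grep -c 'Default Executor-thread' /tmp/javacore.sample.txt)
parked=$(grep 'Default Executor-thread' /tmp/javacore.sample.txt | grep -c 'state:P')
echo "total=$total parked=$parked busy=$((total - parked))"
```

If busy is at (or near) the total with nothing parked, the pool itself is likely the bottleneck, and raising coreThreads/maxThreads as above is worth a try.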

I'll start with a caveat: I have now spec'd out and loosely designed what will be done here, but have not done it yet. There will be 2 follow-up posts ... one when I have done the 30% of the work that gets me to the 90% usable point, and one when I have done the other 70% of the work that gets me to the 100% point. So I am in the process of converting from my Windows 7 desktop (my manager made me come over from Linux when I joined this department because MS Office and OpenOffice do not play well together ... in spite of best efforts, if he sends me a PowerPoint and I use OO to edit it, it goes back to him with the formatting all hosed ... but I digress) to the IBM Client for e-business (C4EB) desktop with Red Hat Enterprise Linux 6.4 (RHEL 6.4) as the host, and KVM as the hypervisor.

Let me step back for a moment and explain my functional requirements, and how I made this decision. I use Cloud software all the time, but realized I was not well versed in the underlying technology (virtualization). In addition, I am looking to dabble in Mobile by taking my area's "Friendly Bank" application and putting a mobile front end on it. Having created a JAX-RS based version and a JDBC version ... I know the app well enough that I figure the coding should not be too tough. Most of our resources (servers et al ... Z and distributed) are behind a firewall within the IBM network (i.e., 2 layers of VPN). Frankly, I'm sure that eventually I could figure out how to get my iPhone (or my wife's iPad) thru all the VPN gorp ... but I really did not want to. So I wanted a virtualization solution that would let me learn more about the core technology and let me start dabbling with Mobile w/out fighting the VPN monsters from my iPhone. Additionally, I wanted to look at automatic (cloud-like) deployments using hypervisor APIs et al (which look a little less common than I thought). One last tidbit: this is all about server virtualization ... desktop virtualization is a whole different animal ... although I've been trying to learn the key concepts of both. So I felt I had 3 good options:

Buy dedicated hardware and run 2 "worthy" type 1 (bare metal) hypervisors

Stay with my Windows 7 desktop and use the free VMware Player as a type 2 (not bare metal) hypervisor

Go with the IBM C4EB with RHEL 6.4 as my desktop and KVM as the hypervisor

Considerations:

So, you might ask, what were the considerations for the 3 options (pros/cons/potpourri)?

Windows 7 and 2 "worthy" type 1 (bare metal) hypervisors

This would be an easy one, relatively speaking. I'd use RHEL and KVM on the hypervisors because, for home education/development purposes, the software was inexpensive or free. The hardware was not too bad. I could get a 3GHz+ quad core with 32GB of memory, a 2TB hard drive, and about 128GB of flash for about $1600. I would get 2 of them and split components across the hardware. While I'd start out with all software that was free (in this development-type environment), I'd split it up so that middleware went to one HV and database went to another. Basically, anything that would normally be licensed would be on one machine and not both (to minimize cost ... a larger model would just say each component selects which/how many HVs it runs on). I would want a gigabit switch between them for "east-west" traffic, but those are under $100 for a home version. So this was looking promising ... then my minivan died and I had to buy a new car ... suddenly, that $3400 or so started to look much steeper. I may find myself here eventually, but since the first phase is really education and playing with networking, DASD sharing, et al ... a 1-box solution is a good start anyway. I don't need golf clubs as good as Tiger Woods' clubs on my first golf lesson (actually I'm way past my first lesson and I barely qualify for beginner's clubs ... but I digress). So this may happen eventually, but for now, budget constrains me.

Windows 7 and the type 2 VMware Player hypervisor

To make this happen, I needed to bump my Lenovo W510 to 16GB of memory; fortunately the hardware guys are great and they helped me make this happen. As a geek, the thought of getting back to using Linux as my base and having a Windows guest for Windows-only software (MS Office primarily) was more appealing, but I've become less religious about IT in my old age. With the Windows desktop and all of the software there already, using the VMware Player did seem attractive. I was a little concerned about a type 2 HV, but I figured it would support all of the VMware APIs and, let's face it, VMware is still the 800 lb gorilla in the virtualization space on distributed (cloud may be another story). As I started to do some research down this road ... an interesting article came out about IBM repeating what they did about 13 years ago, investing another $1 billion in Linux, including lots of expansion into the Power boxes and KVM on Power in particular. So, as IBM is my employer, and the writing on the wall says Linux skills will grow even more in popularity, and KVM in particular is likely positioned for growth in the hypervisor space (and I was looking to get back to my Linux roots anyway) ... I decided on the third option.

RHEL 6.4 desktop including KVM as a bare metal (I think) hypervisor

Some of the key pros to this approach were:

Bare-metal(ish) hypervisor since KVM is in the kernel

Relatively low space needs and, since most of my guests will be Linux, lots of options there

Did I mention I was anxious to get back to Linux as my base (w/out being disrespectful of my manager's wishes)?

I am delaying the actual start here until after an upcoming business trip (I don't want to be working out kinks in my new workstation setup when I'm trying to present to customers). As I understand it, guest Linux systems can share the base disk, but I'm hoping not to rely on that and to instead try some of the very cool stuff that z/VM does with its Linux on Z guests. z/VM creates numerous minidisks that are read-only (R/O) to the Linux guests. Then the guests can mount directories for logs, configs, or anything else that would need to be unique and writable within (or outside) these R/O file systems. So, technically, thousands of guests can share the same JVM, the same base WebSphere binaries, et al. This saves HUGE amounts of space and can have positive maintenance impacts as well.
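As a sketch of what that could look like on a Linux guest (device names and mount points below are hypothetical, not from any real setup): the shared product disk comes in read-only, and a small guest-private disk is bind-mounted over the few directories that must be writable.

```shell
# Hypothetical /etc/fstab fragment for one guest. The product install is on a
# disk shared (read-only) by many guests; a guest-private R/W disk holds the
# pieces that must be unique, bind-mounted over the matching R/O directories.
/dev/vdb1             /opt/ibm/wlp        ext4  ro,defaults  0 0
/dev/vdc1             /var/local/wlp      ext4  rw,defaults  0 0
/var/local/wlp/usr    /opt/ibm/wlp/usr    none  bind         0 0
/var/local/wlp/logs   /opt/ibm/wlp/logs   none  bind         0 0
```

The effect is the z/VM minidisk pattern in Linux terms: many guests read one copy of the binaries, while each keeps its own configs and logs.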

Random design thoughts that need to be firmed up

The goal was density and efficiency. This means that I wanted one or 2 "base" images (how I create them might be another story) and then some post-start work on the hypervisor (HV) to put selected components on the VM. For example, I may have a RHEL or SUSE basic OS along with a JVM (OSGi?) which I would try to get started on image start. This JVM (Tomcat or Liberty?) will be the new "consciousness" of the VM (the agent). From there, I could "add" the components requested (sort of a poor man's cloud). With most components, there would be an option to add them by copy or by reference. Reference would use the R/O core install (90+% of the space) with mount points for guest-specific areas (configs, logs, ...). This is commonly done in the z/VM space with Linux guests. Since I am looking to support only Linux guests, this should not be too difficult. What I need to do is:

Create image that has

Base Linux OS

Disk allocation for one R/W space plus additional file systems that are R/O (assuming I can do this and give the same "space" to multiple guests)

Components in "shared" space. These "should" be installed images, with config et al. copied into a R/W directory mounted inside this image on each guest (customized per guest)

Tomcat

Liberty

DB2

MySQL

Some LDAP/security server (on Linux, so not using AD)

Some JVM

Items that need to be done on first startup of VM (will work recovery into the picture later) via script

Mount R/O directory for JVM and Tomcat/Liberty (whatever agent)

Copy contents of the config directory(ies) into a local R/W directory, customize, and create a link (master config in a separate R/O dir)
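A minimal sketch of that copy-and-link step is below. All paths, file names, and the TEMPLATE placeholder are my own invention for illustration; a real first-boot script would point at the actual R/O master mount.

```shell
#!/bin/sh
# First-boot sketch: seed a guest-private, writable config area from the
# read-only master copy, customize it, and link it into a stable path.
MASTER=/tmp/ro-master/liberty-config   # stands in for the shared R/O dir
LOCAL=/tmp/guest-rw/liberty-config     # guest-private R/W area
LINK=/tmp/guest-rw/active-config       # stable path the server points at

# Fake up a master copy so this sketch is self-contained.
mkdir -p "$MASTER" "$(dirname "$LOCAL")"
printf '<server description="TEMPLATE"/>\n' > "$MASTER/server.xml"

# Copy only on first start; later boots keep the customized local copy.
if [ ! -f "$LOCAL/server.xml" ]; then
    mkdir -p "$LOCAL"
    cp -R "$MASTER/." "$LOCAL/"
    # Per-guest customization would happen here (hostname, ports, ...).
    sed "s/TEMPLATE/$(hostname)/" "$MASTER/server.xml" > "$LOCAL/server.xml"
fi
ln -sfn "$LOCAL" "$LINK"
echo "active config: $(cat "$LINK/server.xml")"
```

Because the copy is guarded by the existence check, the same script can run on every boot: non-initial starts fall straight through to the link.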

Items on non-initial VM starts (the startup config should hopefully take care of this)

Maintenance of core components (NOT needed for initial roll-out)

Keep maintenance and master copies of all reference software. Use a central R/W VM to test any new upgrades and eventually copy to the master when ready

Identify maint window at which time central may notify agent of software update and agent can take appropriate steps

More random thoughts

Think about maintenance of images and versions
Write agent application and central application
Agent to have ping mechanism to verify it is good
Future of security for inter-VM (and intra-VM?) calls
"Central" now on its own Linux guest (Tomcat/Liberty)
Want VMs to operate if possible even if not able to communicate w/Central (apps should run, central only needed if IaaS/PaaS work needed).
After initial install, keep everything needed in case it is needed again for recovery (future)
After initial install, generate startup Config for VM so that it does not go thru extra steps on VM startup
Mark some VMs as unrecoverable (or bad to recover) because they contain data. Possible future stash FS where it can create recovery artifacts?
Eventually look to OpenStack to see how easily this could be done cloud-ified
Component to define how to upgrade (quiesce?)
Relationship between HVs and components ... not all components on all HVs
Licensing is in future
Nanny of central in future. Possibly serviceability miniDaemon on VM to send agent logs and restart VM

So where does Phase I (near term) end?

Big tasks in phase I are:

Needed day 1 or as soon after as possible

Back up, back up, back up, save, save, save (try to have an emergency contingency)

Install C4EB with RHEL 6.4 on machine and get very basic work stuff configured and working (3270 emulation, Lotus Notes, Sametime, ...)

Get KVM packages installed and learn as much KVM as possible as fast as possible

Get Friendly Bank application running in Liberty (on host or on a guest)

Get Friendly Bank DB2 tables set up in an external virtual server I have on the IBM network

Maybe, if not too bad, get the DB2 tables set up in another guest on this machine (this will be a bit memory constrained)

Write a second blog article about what I've learned and where I had to tweak the plan
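For the KVM step in that list, the basic RHEL 6 package set and a couple of sanity checks look roughly like this. This is a sketch from the standard RHEL 6 virtualization packages (run as root); your repo setup or group names may differ.

```shell
# Install the KVM/libvirt stack on RHEL 6.x.
yum install -y qemu-kvm libvirt virt-manager python-virtinst

# Start the libvirt daemon and have it come up on boot.
service libvirtd start
chkconfig libvirtd on

# Sanity checks: hardware virtualization flags, and does libvirt answer?
egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero means HW virtualization support
virsh list --all                     # should show an (empty) list of guests
```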

Being of an agile mindset, I'm going to wait until I get a whole lot closer to the current set of plans being complete before I go too much further. Future plans will be extracted from the random thoughts and general design thoughts above ... but as long as I'm doing my day job OK ... where this goes is still quite flexible ... and it could continue extending until I can afford those separate boxes and get a little more heavily into some hypervisor APIs et al. So who knows what the future holds, but I hope to have a great base of infrastructure to facilitate figuring it out as I go along. Having it all on one box will make it slow, but will also make it very convenient when I travel. So for now, here's hoping it's still 2013 (and maybe not too far from now) when I do my blog showing Phase I is ending and defining Phase II a little better. i.e., hoping I put some rubber to the road.