Archive

I recently read an article about how IBM i ISVs are apparently wrong in the way they charge for their products. While I do have some sympathy and recognize that some pricing practices are not appropriate in today’s globally competitive market, I also feel the argument is incomplete.

As an ISV we develop a product in the hope of being able to sell it, but that development is an upfront cost incurred before you even make the first sale. The cost of developing a product is not insignificant, so you need to make sure that the market is big enough to cover those costs and more (yes, profit is key for survival).

Here are some of the arguments made within the article.

“Provide straight-forward, flat pricing with no haggling”

The poster goes on to state that it should be a single price regardless of the size of the system or the activity on that system such as active cores or number of users etc.

Well, I have yet to walk into a customer and not be expected to haggle on the price! It’s human nature to want a better deal than the one originally placed in front of you; it makes you feel better when you get that lower price and can take it to your manager.

Don’t play favorites? I have already blown that one out of the water above; some customers demand more just because of who they are. A large blue-chip company brings with it more opportunity to up-sell other products, and they tend to have professional negotiators, so they usually get the best deal! But they are generally happy to pay more for the right solution, and because they have bigger operations the cost of the purchase is spread over a much wider base. Maybe they are not favorites, but they certainly get more attention.

If I walk into a small client who has a single 1-core system and fewer than 20 users, what do you think he is going to say when he finds out he is paying the same price as the big guy down the road with 64 active cores and 3,000 users? I am pretty sure he is not going to feel like he was dealt a good hand!

I do agree that the price has to be fair, and I do not get involved with adding additional cost just because the system has multiple LPARs or more active cores; the price is set by the IBM tier group for all our products. This should reflect the capability of the system to handle much more activity and therefore spread the additional cost over a much larger base.

“Freeze software maintenance when a customer purchases your software”

Nice idea, but totally impossible to meet! If the developers I employ would accept a pay freeze at the time I hire them, and the suppliers I use (that’s everyone who adds cost to my overhead) would freeze their costs, maybe I could do the same. In reality it’s never going to happen. There are too many influences that affect the ongoing cost of support, and that cost has to be passed on to the users of the software. The users always have the option of terminating support; they can stick with what they have as long as they want. Having said all of that, we have not raised our maintenance costs to our customers for many years; we are just making a lot less money than we should.

The question about including the first year of support in the initial price is moot: add them together and they are a single price. Some companies like to allocate license fees and maintenance to separate accounts, so they like to see them broken out. We don’t stop improving the product on the day we sell it; it’s a continuous cycle, so if you need support or want to take advantage of a new feature we just added, maintenance is an acceptable cost.

“Make it easy for customers to upgrade their hardware”

If a client stays within the same IBM tier they should not pay additional fees to move their application, however if they move up a tier group they should. This all comes back to the discussion above about how the initial cost should be set.
We do not charge for moving the product to a new system in the same or lower tier, we don’t add a fee for generating the new keys either, but you must be up to date on maintenance or expect to pay something even if it is just a handling fee.

IBM charges for additional core activations, which we do not agree with, but when you look at the capability of today’s systems and what activating an additional core can do for adding more users, it’s not that simple anymore. What I certainly do not like about IBM’s fees is that we are billed for the extra cores PLUS we have to buy additional user licenses if we add more users! That is just gouging at its best!

“Don’t make your customers pay for capabilities they don’t need”

It’s easy to say “modularize your application in such a manner as to allow the clients to pick and choose what they want.” The reality is that some options just can’t be left out because of dependencies on other options. Another problem is that clients now have to decide exactly what they are going to purchase; how many times have you bought a product with more options than you need just because the price point for the additional features was so minimal? The client is not paying for something he does not need; he is paying for a product that meets his requirements, and maybe more, at a price that is acceptable. If the price is wrong, your competitor will make the sale, not you.

Purchasing decisions are not always made for the right reasons; we are human and we make decisions based on our own set of principles. Even in companies with purchasing policies that should reduce the effect of human emotion, emotion will still be a part of the sale.
Trying to predict a client’s choice is near to impossible even if you have a relationship with the decision maker; other factors will always come into effect. All you can do is put forward what you feel is a fair and acceptable price and be prepared to haggle. Trying to force a set of rules such as the ones above into the process is only going to end badly!

I have always said that I did not need to learn or use ‘RPG’ on the IBM i, as I always found that ‘C’ could do all that I needed. Recently I was asked by a friend to help with some RPG code to handle Java and the clean-up of the objects it created (Java would not automatically clean up objects because they were effectively created by the RPG program, and this program ran constantly, so temporary storage just kept growing until it blew up). Not knowing RPG or understanding how the layout worked (I jumped straight into ‘/Free’!), I found this very difficult, as ‘/Free’ is not really free format (there are still some column constraints) while ‘C’ really is. Still, after some research and a lot of head scratching I finally got some sample code working. We then built a service program that could handle the Java clean-up and added code to the existing RPG programs to call it. The solution works; the client’s systems are no longer blowing up with memory issues caused by Java objects not being cleaned up.

I thought, OK, that’s the last time I will have to do that, and was happy that I could get back to good old ‘C’ programming. Unfortunately, I came across another issue which required me to pick up the RPG manuals and code up a test application.

We have a client who was experiencing problems with an application that uses commitment control and constraints, which required us to build a test that would emulate the problem on our systems. As usual, the first thing I did was to write a ‘C’-based solution. I did find a commitment control test written by Paul Tuohy here. It was all written in RPG, so I thought I would just follow the program logic and write a ‘C’ version, which seemed the easiest option. While I could get the simple file update logic built, and the program would work without commitment control, I found that as soon as commitment control was started the program would freeze on receipt of data from STDIN (I will have to ask IBM why when I have time). So I decided my best option was to take the code that Paul had provided and build my own interpretation of the program with some additional features I needed.

I wanted the program to accept multiple entries plus allow deletes by key before the commit of the data, so I had to make a few changes to the logic and add a new delete option. While the program is very clunky, it does achieve what I needed it to do, and I found out a lot about commitment control and constraints as a result. I am also unsure if the program is as efficient as it could be, but it works, and for now that is all that’s needed.
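The flow described above — add several entries, delete one by key, then commit the whole cycle — can be sketched with sqlite3 in Python. This is only an illustration of the transaction logic, not the actual RPG program or IBM i commitment control, and the table and column names are invented for the example:

```python
import sqlite3

# Stand-in for the test: entries are added and one is deleted by key inside
# a single commit cycle; only the committed state becomes permanent.
conn = sqlite3.connect(":memory:")
conn.isolation_level = None               # manage the commit cycle ourselves
cur = conn.cursor()
cur.execute("CREATE TABLE header (hkey INTEGER PRIMARY KEY, descr TEXT)")

cur.execute("BEGIN")                      # start the commit cycle
for key, descr in [(1, "first"), (2, "second"), (3, "third")]:
    cur.execute("INSERT INTO header VALUES (?, ?)", (key, descr))
cur.execute("DELETE FROM header WHERE hkey = ?", (2,))  # delete by key pre-commit
cur.execute("COMMIT")                     # surviving entries become permanent

print([k for (k,) in cur.execute("SELECT hkey FROM header ORDER BY hkey")])
# → [1, 3]
```

A rollback instead of the COMMIT would discard all three inserts and the delete together, which is exactly the behaviour the test needed to exercise.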

Note: the Blog does not allow RPG code indentation so the view you see is not what it was copied in as!

The database was exactly the same as the one Paul had defined, including the cascading delete for the details file (I liked that bit), so when we delete the header record the matching records in the details file are also deleted. That saved us having to chain (see, I can speak RPG) through the details file and remove the entries. Now we can see the problem the client was experiencing and know how to resolve it.
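The cascading delete can be sketched with sqlite3 in Python, using invented header/detail table names; deleting the header row removes its matching detail rows automatically, so no manual chain-and-delete over the details file is needed:

```python
import sqlite3

# Illustrative only: a detail table with ON DELETE CASCADE back to its header.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite needs cascades switched on
conn.execute("CREATE TABLE header (hkey INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE detail (
                    dkey INTEGER PRIMARY KEY,
                    hkey INTEGER REFERENCES header(hkey) ON DELETE CASCADE)""")

conn.execute("INSERT INTO header VALUES (1)")
conn.executemany("INSERT INTO detail VALUES (?, 1)", [(10,), (11,), (12,)])

conn.execute("DELETE FROM header WHERE hkey = 1")  # cascades to detail rows
print(conn.execute("SELECT COUNT(*) FROM detail").fetchone()[0])
# → 0
```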

As usual Google was our best friend; thanks to Paul Tuohy and ITJungle for providing the sample code we based the test application on. I am now a little less resistant to RPG and may delve a little more into its capabilities and how I can use it effectively; who knows, I may even become good at it? The point I am trying to make here is that while I still do not want to use RPG, I did what I keep telling others to do: I used the best tool for the job. Using a language just because it is all you know is not always the best option; sometimes you have to jump outside of your comfort zone and try something new.

We have been trying to migrate our existing IBM i hosting IBM i partitions to a VIOS hosting IBM i, AIX and Linux configuration. As we have mentioned in previous posts, there are a lot of traps that have snagged us so far, and we still have no system that we can even configure.

The biggest recommendation that we took on board was to create a dual VIOS setup; this means we have a backup VIOS that will take over should the first VIOS partition fail. This is important because the VIOS is holding up all of the client partitions, and if it fails they all come tumbling down. As soon as we started to investigate this, we found that we should always configure the VIOS on a separate drive from the client partitions. My question is: how do we configure 2 VIOS installs (each with its own disk) that address the main storage to be passed out to the client partitions? We have a RAID controller which we intend to use as the storage protection for the clients’ data, but we still struggle with how that can be assigned to 2 instances of the VIOS. The documentation always seems to be looking at either LVM or MPIO to SCSI-attached storage; we have all internal disk (8 SAS drives attached to the RAID controller), so the technology we would use to configure the drives attached to the RAID controller as logical volumes, which are in turn mirrored via LVM, is stretching the grey matter somewhat. If in fact that is what we have to do? I did initially create a mirrored DASD pair for a single VIOS in the belief that if we had a DASD failure the mirroring would help with recovery; however, the manuals clearly state that this is not a suitable option (I did create the pair and install VIOS, which seemed to function correctly, so I am not sure why they do not recommend it).

The other recommendation is to attach dual network controllers and assign one to each VIOS, with one in standby mode which will be automatically switched over should a failure occur on the main adapter. As we only have a single adapter, we have now ordered a new one from IBM (we started the process over a week ago and the order is still to be placed). Once that adapter arrives we can install it and move forward.

Having started down this road and having a system which is non-functioning, I have to question my choices. IBM has stated that the VIOS will be the preferred choice for the controlling partition for POWER8 and onwards, but the information to allow small IBM i customers to implement it (without being a VIOS/AIX expert) is in my view very limited or even non-existent. If I simply go back to the original configuration of IBM i hosting IBM i, I may have to bite the bullet at some time in the future and take the VIOS route anyhow. Having said that, hopefully more clients will have been down this route by then and the information from IBM will be more meaningful. I have read many IBM Redbooks/Redpapers on PowerVM and even watched a number of presentations on how to set up PowerVM; however, most of these (I would say all, but that may be a little overzealous) are aimed at implementing AIX and Linux partitions, even though the IBM i gets a mention at times. If IBM is serious about getting IBM i people to really take the VIOS partitioning technology on board, they will need to build some IBM i-specific migration samples that IBM i techies can relate to. If I do keep down this path, I intend to show what the configuration steps are and how they relate to an IBM i system so they can be understood by the IBM i community.

We have a backup server that we can use for our business, so holding out a few more days to get the hardware installed is not a major issue. We hope that by the time we have the hardware we will have some answers on how the storage should be configured to allow the VIOS redundancy, and be sure we have the correct technology implemented to protect the client partitions from DASD failure.

If you have any suggestions on how we should configure the storage, we are all ears.

We have been resistant to implementing anything to do with the IBM HTTP server for a number of reasons, the main one being that we feel Linux is a better option for running HTTP services. However, when we heard that IBM was now providing a mobile interface for the IBM i as part of the 7.2 release, we felt we should take a closer look and see if it was something we could use. To our surprise, we found the initial interaction very smooth and fast.

Installation was fairly simple, other than the usual “I don’t need to read the manuals” part! We had installed 7.2 last week with the intention of reviewing the mobile access; unfortunately, we did not realize that there were already Cum PTFs and PTF Groups available. Our first try at the install stopped short when we thought WebSphere was a requirement; as it turns out, it can be used but is not a prerequisite. Thanks to a LinkedIn thread we saw and responded to, our misconception was rectified and we set about trying to set up the product again. We followed all of the instructions (other than making sure the HTTP PTF Group was installed :-() and it just kept giving us a 403 Forbidden message for /iamobile. It took a lot of rummaging through the IFS directories to find that when the CFGACCWEB command ran, it logged the fact that a lot of directories were missing (even though the message sent when it finished stated it had completed successfully; maybe IBM should look at that?), so we reviewed all of the information again. It turns out the mobile support is delivered in the PTF Group, so after downloading and installing the latest Cum plus all of the PTF Groups we found the interface now works.

As I mentioned at the beginning, I am surprised at just how snappy it is. We don’t have hundreds of users, but our experience of the Systems Director software for IBM i made us very wary about using anything to do with the IBM i HTTP servers, so we had no high expectations of this interface. We saw no lag at all in the page requests, and the layout is very acceptable. When the time came to enter information, the screen automatically zoomed into the entry fields (I like that, as my eyesight is not what it used to be). We looked at a number of the screens but have not gone through every one. I really like the ability to drill down into the IFS and view a file (no edit capability), which will be very useful for viewing logs in the IFS.

Here are a few of the screen shots we took; the first set is from an iPod, the second is from the iPad. We were going to try the iPhone, but the iPod has the same size output so we just stuck with testing from the iPod (yes, we like Apple products; we would get off our Microsoft systems if IBM would release the much-rumored RDi for the Mac). I think IBM did a good job on the page layouts and content.

iPod Display of file in IFS.

iPod display of messages

iPod SQL output

iPod sign on screen shield7

iPod 5250 session

iPod initial screen

The iPad screens.

iPad Display of messages on Shield7

iPad 5250 session, note how it is connected to another system (shield6)

iPad SQL output

iPad List of installed Licensed Programs

iPad initial page

Clicking on the images will bring up a larger one, so if, like me, you are a bit blind you can see the content. Also take notice of the 5250 connection to the Shield6 system; Shield6 is not running the mobile access or the HTTP server, so we were surprised when we could start a session to Shield6 using the mobile access from the Shield7 system. I definitely think this is a big improvement on anything else we have seen in terms of speed using the IBM HTTP server.

If you don’t have the mobile support installed, do it now! The fact that it is PTF’d all the way back to V6R1 is a big benefit. We will certainly be adopting this as our preferred access method from our mobile devices, especially for providing support while we are away from the office.

I have been getting a number of emails about Chinese companies trying to register our domain with a Chinese registrar and how we should act now to register before they can! As always, I am ignoring them as they are a scam! Today I received an email from the European Domain Center asking if we would post a link to their page which explains the scam and provides a list of the offenders, so I checked it out, and sure enough they have a good explanation of the scam plus a long list of the perpetrators along with emails etc.

A couple of new features have been added to the HA4i product as a result of customer requests. Auditing is one area where HA4i has always been well supported, but as customers get used to the product they find areas where they would like some adjustments. The object auditing process was one such area: the client was happy that the results of the audits were correct, but asked if we could selectively determine which attributes of an object are to be audited, as they have some results which, while correct, are not important to them.

The existing process was a good place to start, so we decided to use it as the base, but while we were making changes we also improved the audit to bring in more attributes to be checked. We determined a totally new set of programs would be required, including new commands and interfaces; this would allow the existing audit process to remain intact where clients have already programmed it into their schedulers and programs. The new audits run by retrieving the list of parameters to be checked from a control file and only compare configured parameters. The results have been tested by the client, and he has given us the nod to say this meets with his approval. We also added new recovery features which allow out-of-sync objects to be repaired more effectively.

Another client approached us with a totally different problem: they were seeing errors logged by the journal apply process because developers were saving and restoring journaled objects from the production environment into test libraries on the production system. This caused a problem because the objects are automatically journaled to the production journal when they are restored, so when the apply process finds the entry in the remote journal it tries to create the object on the target system and fails because the library does not exist. To overcome this we amended the code which supports the re-direction technology for the remote apply process (it allows journal entries for objects in one library to be applied to objects in another library) to support a new keyword, *IGNORE. When the apply process finds these definitions it will automatically ignore any requests for objects in the defined library. NOTE: the best solution would have been to move the developers off the production systems and develop more HA-friendly approaches to making production data available, but in this case that was not an option.
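The redirection lookup described above can be sketched in a few lines of Python. This is a hypothetical illustration, not HA4i’s actual code or configuration format; the library names and the mapping table are invented for the example:

```python
# Redirect table: a source library maps either to a target library or to
# "*IGNORE", in which case the journal entry is skipped entirely.
REDIRECTS = {"APPLIB": "APPTGT", "TESTLIB": "*IGNORE"}

def apply_target(library):
    """Return the library to apply a journal entry against, or None to skip it."""
    target = REDIRECTS.get(library, library)   # default: same-named library
    if target == "*IGNORE":
        return None                            # ignored library: drop the entry
    return target

print(apply_target("TESTLIB"))   # None -> entry ignored
print(apply_target("APPLIB"))    # APPTGT -> redirected to another library
print(apply_target("PRODLIB"))   # PRODLIB -> applied as-is
```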

We are always learning and adding new features into HA4i, many of them from customer requirements or suggestions. Being a small organization allows us to react very quickly to these requirements and provide our clients with a High Availability Solution that meets their needs. If you are looking for an Affordable High Availability or Disaster Recovery Solution or would like to ask about replacing an existing solution give us a call. We are always happy to look at your needs and see if HA4i will fit your solution requirements and budget.

I sometimes worry about how we perceive Open Source products and what we as developers should expect from them. I like to keep a watch on what is happening within the IBM i/PHP ecosystem, so I tend to watch the various forums looking at what people are doing. I had not been following the Zend forums for some time, as I was told that my opinions were not welcome, but I had a bit of time to spare so took a quick look at what is going on. I came across the following post, which raised a couple of questions: Working with multiple occurrence data structures.

This seems to be the source of another post, Toolkit errors after update, where at the bottom of the thread is a comment containing the following statement.

Frankly, I find your whole rant tiresome, but very well …

You are completely missing the point here Timo. Unlike some other toolkits, XMLSERVICE does NOT REQUIRE proprietary software “connection”, therefore you can use all manner of 1-tier (IBM i-2-IBM i) and 2-tier (any-2-IBM i) connection transports. PHP Toolkit / XMLSERVICE cannot control the behaviour of each and every connection possible, and in fact, there are 2-tier connections that don’t have any idea what a LIBL would be because this is a truly unique feature of IBM i. IN ANY EVENT, you can simply call CHGLIBL in any staeless or staefull XMLSERVICE job and change the state of the LIBL.

I read through the whole thread looking for the OP’s rant. I could not find it, so I went through the previous post from the same OP, which is where I found the possible source of the irritation. Basically, the OP had mentioned that they did not have the performance issues when they ran with the Easycom Toolkit from Aura. I did not think it was said in a bad way, but simply that they saw better performance from the Aura toolkit than they were seeing with the new XMLSERVICE despite many improvements. That is not the point of this post, as we have already said in many previous posts what our feelings are about the performance; instead I would like to mention a few things I would take away from this.

1. This is open source, and as such, if you have any problems with the way it runs you should be willing to pitch in and develop it to meet your needs. Alan and Roger have done a great job so far.
2. Don’t blame the test data; the problem is in the XMLSERVICE technology, not the data or the amount generated. We all see applications we feel could be better designed and developed.
3. Comparison should be expected from clients; they took a decision based on the information they were given. Zend/IBM have said XMLSERVICE is the way forward for PHP on IBM i; we disagree, but oh well! :-).
4. What is the cost of the effort so far in making the migration? Would it not have been more cost effective to stick with the original toolkit and pay for Aura to work it out?
5. Emotional responses should be avoided; I did not see any significant reason from the OP to justify the responses. But it’s free, so don’t expect anything else.

Open source is a great option as long as you have the ability to change it to meet your requirements. Before you charge into a project which uses open source technology, make sure it will meet all of your requirements before making it the standard, or be prepared to spend a lot of time adjusting the code to meet your needs. Sometimes paying someone else to maintain and develop new features for the technology is much more cost effective than doing it yourself. Aura may seem to be forcing your hand with the original i5_toolkit functions by requiring you to pay for them, but if you look at the technology and what it offers, it’s a very small price to pay for the benefits it brings. Plus, you can always ask for improvements under the maintenance agreement, which Aura would develop for free if they deemed it worthy.

We are still working with Aura and offer support and licensing for their products here in North America. If you need help licensing the product or would like to know more about how we have implemented the PHP technology in our products, let us know; we are very happy to help guide you to the light.

I was reading a number of articles in the press this morning about the IBM i (i5, iSeries, AS/400 and the rest) and the possible install base. The articles suggest that there are around 35,000 “active” IBM customers but around 110,000 customers who are still running the system without any maintenance or support. The articles also suggest that this number can be doubled in terms of systems, because the average customer has 2 systems.

The articles then go on to ask why these customers, who are loyal to the platform, are still running old releases of the software/hardware, and suggest that this could be due in part to the fact that the system is so robust and secure they have no need to do anything with it. I think some of that has merit, but in the same breath I think the pricing practices of IBM have contributed to that position. The second-hand market is still very strong, and many customers are still changing up their systems to later ones without any maintenance or support from IBM, so maybe this points to the pricing of support by IBM? I stopped hardware maintenance simply because it did not make financial sense for the size of system we run! It was better to throw out the system and get another one if a major component failed (not that they do that often).

Here is a suggestion for IBM. I have a number of older systems which I do not run. What about allowing those customers who are running systems where the CPU was pegged at a certain percentage the ability to upgrade those old systems to run the FULL CPU capability? I have a 515 and a 520 which are limited to 20% of the CPU. The processing power of these systems was a lot less than my new system, yet they cost me a lot more to purchase. If IBM allowed that processor to be opened up as long as I had them on maintenance, maybe I and some other customers would take up such an offer? Maybe you could even make it an annual fee, so you have to keep up with the changes in the OS; maybe that would remove the “if it ain’t broke don’t fix it” mentality. It would also add value to paying for maintenance which customers could relate to, and it would be IBM maintenance, not third party.

So you ask why IBM would do that; after all, they won’t get much revenue even if a large proportion took them up on it? Well, maybe it would help those customers who are sitting in the dark ages move towards the new technology. They could stipulate a minimum OS requirement to get the new keys, which would force many to look at the system they run today. Maybe it would even get those customers who see the system as old to see it in a new light (what other system offers the ability to get 5X the processing power just by upgrading the OS?). It would enable them to look at the newer capabilities which were not available before because the CPU restriction made them too slow and cumbersome. How many customers who are putting up with multi-second response times use this as confirmation that the system is old and needs replacing? Short term, IBM does not make a lot of money because the customers will only pay a small fee to get the upgrade, but those customers may then see the system in a new light and develop it further. If you are not having to invest in something, it has no value; that is the problem with the IBM i.

If you are running a crippled system that has a lot more power than IBM has released, talk to your IBM representative; maybe if enough people ask, IBM may sit up and listen? But expect to pay something, even if it is only a requirement to keep that system on maintenance.

Chris…

PS: I am talking about opening up those P05 systems which were crippled at a percentage of the CPU. Today’s P05 systems have much higher CPW ratings for less cost; just allowing the CPU to reach its full potential, without matching the newer systems’ capabilities, is all I am asking for. There should be plenty of other reasons to move to the latest hardware technology.

As part of the new features we are adding to the HA4i product we needed to build a test bed to make sure the LOB processing we had developed would actually work. I have to admit I was totally in the dark when it comes to LOB fields in a database! So we had a lot of reading and learning to do before we could successfully test the replication process.

The first challenge was using SQL; we have used SQL in PHP for a number of years, but to be honest the complexity we got into was very minimal. For this test we needed to be able to build SQL tables and then add a number of features which would allow us to test the reproduction of the changes on one system to the other. Even now I think we have only scratched the surface of what SQL can do for you, as opposed to the standard DDS files we have been creating for years!

To start off with, we spent a fair amount of time trawling through the IBM manuals and Redbooks looking for information on how we needed to process LOBs. The manuals were probably the best source of information, but the Redbooks did give a couple of examples which we took advantage of. The next thing we needed was a sample database to work with (if we swing between catalogs, libraries, tables and files too often, we are sorry!) which would give us a base to start from. Luckily, IBM ships a nice sample database with the OS that we could use for this very purpose; it had most of the features we wanted to test plus a lot more we did not even know about. To build the database, IBM provides a stored procedure (CALL QSYS.CREATE_SQL_SAMPLE (‘SAMPLE’)); we ran the request in Navigator for i (not sure what they call it now) using the SQL Scripts capability and changed the parameter to ‘CORPDATA’. This created a very nice sample database for us to play with.

We removed the QSQJRN setup, as we do not like data objects to be in the same library as the journal, and then created a new journal environment. We started journaling all of the files to the new journal and added a remote journal. One feature we take advantage of is the ability to start journaling against a library, which ensures any new files created in the library are picked up and replicated to the target. The whole setup was then replicated on the target system and configured into HA4i.

As we were particularly interested in LOBs and did not want to make too many changes to the sample database we decided to create our own tables in the same library. The new files we created used the following SQL statements.
CREATE TABLE corpdata/testdta
(First_Col varchar(10240),
Text_Obj CLOB(10K),
Bin_Obj BLOB(20M),
Forth_Col varchar(1024),
Fifth_Col varchar(1024),
tstamp_column TIMESTAMP NOT NULL FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP)

CREATE TABLE corpdata/manuals
(Description varchar(10240),
Text_Obj CLOB(10K),
Bin_Obj BLOB(1M),
tstamp_column TIMESTAMP NOT NULL FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP)

We will discuss the tstamp_column fields later, as these are important to understand from a replication perspective. We checked the target and found HA4i had successfully created the new objects for us, so we could now move on to adding some data to the files.

Because we have LOB fields we cannot use the UPDDTA option we have become so fond of, so we needed to create a program that would add the required data to the file. After some digging around we found that C can be used for this purpose (lucky for us, as we are C programmers) and set about developing a simple program (yes, it is very simple) to add the data to the file. Here is the SIMPLE program we came up with, based on the samples supplied by IBM in the manuals.

#include <stdio.h>
#include <string.h>

EXEC SQL INCLUDE SQLCA;

int main(int argc, char **argv)
{
    FILE *qprint = fopen("QPRINT", "w");

    /* LOB file reference host variables, following the IBM manual samples;
       the LOB content is read from the stream files named at run time */
    EXEC SQL BEGIN DECLARE SECTION;
    SQL TYPE IS CLOB_FILE txt_file;
    SQL TYPE IS BLOB_FILE bin_file;
    EXEC SQL END DECLARE SECTION;

    /* argv[1] = path to the CLOB source, argv[2] = path to the BLOB source */
    strcpy(txt_file.name, argv[1]);
    txt_file.name_length = strlen(txt_file.name);
    txt_file.file_options = SQL_FILE_READ;
    strcpy(bin_file.name, argv[2]);
    bin_file.name_length = strlen(bin_file.name);
    bin_file.file_options = SQL_FILE_READ;

    EXEC SQL WHENEVER SQLERROR GOTO badnews;

    EXEC SQL INSERT INTO CORPDATA/TESTDTA
        VALUES ('Another test of the insert routine into CLOB-BLOB Columns',
                :txt_file,
                :bin_file,
                'Text in the next column',
                'This is the text in the last column of the table....',
                DEFAULT);

    EXEC SQL COMMIT WORK;
    goto finished;

badnews:
    fprintf(qprint, "There seems to have been an error in the SQL?\n"
                    "SQLCODE = %5d\n", SQLCODE);

finished:
    fclose(qprint);
    return 0;
}

The program takes two strings, which are the paths to the CLOB and BLOB objects we want inserted into the table. This program updates the TESTDTA table, but it differs only slightly from the program required to add records to the MANUALS table. As I said it is very simple, but for our test purposes it does the job.

Once we had compiled the programs we called the program to add the data; it did not matter how many times we called it with the same data, so a simple CL script in a loop allowed us to generate a number of entries at a time. The :txt_file and :bin_file host variables are references to the objects we would be writing to the tables; the manuals have a very good explanation of what these are and why they are useful.
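The CL loop itself is nothing special; a sketch along these lines, where ADDLOB and the IFS paths are hypothetical stand-ins for our insert program and its source files:

```
PGM
    DCL VAR(&COUNT) TYPE(*DEC) LEN(3 0) VALUE(0)
    /* same data every time; the table has no unique key so each call adds a row */
LOOP:       CALL PGM(ADDLOB) PARM('/home/test/chapter1.txt' '/home/test/chapter1.pdf')
    CHGVAR VAR(&COUNT) VALUE(&COUNT + 1)
    IF COND(&COUNT *LT 10) THEN(GOTO CMDLBL(LOOP))
ENDPGM
```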

Once we had run the program a few times we found the data had been successfully added to the file. The LOB data, however, does not show up in a DSPPFM; it is instead represented by *POINTER in the output.

We have an audit program which we ran against the table on each system to confirm the record content is the same, this came back positive so it looks like the add function works as designed!

The next requirement was to be able to update the file. This can be accomplished with SQL from the interactive SQL screens, which is how we decided to make the updates. Here is a sample of the updates used against one of the files, which updates the record found at relative record number (RRN) 3.
UPDATE CORPDATA/MANUALS SET DESCRIPTION =
'This updates the character field in the file after reusedlt changed to *no in file open2'
WHERE RRN(manuals) = 3
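This is also where the tstamp_column starts to matter: the UPDATE above never names it, yet the database advances it on every change, which is the kind of thing a replication tool can key off. A quick way to watch it move, using the same RRN trick:

```sql
-- tstamp_column is a ROW CHANGE TIMESTAMP, so the UPDATE above
-- advances it even though the statement never references it
SELECT RRN(manuals) AS rrn, tstamp_column
  FROM CORPDATA/MANUALS
 WHERE RRN(manuals) = 3
```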

Again we audited the data on each system and confirmed that the updates had been successfully replicated to the target system.

That was it; the basic tests we ran confirmed we could replicate the creation and update of SQL tables which had LOB content. We also built a number of other tests which checked that ALTER TABLE and the addition of new views etc. would work, but for the LOB testing this showed us that the replication tool HA4i could manage the add, update and delete of records which contained LOB data.

I have to say there was a lot of hair pulling and head scratching when it came to programming the actual replication process, especially with the limited information IBM provides. But we prevailed, and the replication appears to be working just fine.

This is where I point out one company that is hoping to make everyone sit up and listen, even though it has nothing to do with High Availability Solutions. Tembo Technologies of South Africa has a product which we initially looked at to help companies modernize their databases, moving from the old DDS-based file system to a new DDL-based one. Now that I have been playing with the LOB support and seen some of the other VERY neat features SQL offers above and beyond the old DDS technology, I am convinced they have something everyone should be considering. Even if you just make the initial change and convert your existing DDS-based files into DDL, the benefits will be enormous once you start to move to the next stage of application modernization. Unless you modernize your database, the application you have today will be constrained by the DDS technology. SQL programming is definitely something we will be learning more about in the future.

As always, we continue to develop new features and functionality for HA4i and its sister product JGQ4i. We hope you find the information we provide useful and take the opportunity to look at our products for your High Availability needs.

(If you are not in charge of this, please forward this to your CEO, because this is urgent. Thanks)

We are a Network Service Company which is the domain name registration center in Shanghai, China. On Feb 20, 2012, we received an application from Hantong company requested “shieldadvanced” as their internet keyword and China (CN) domain names. But after checking it, we find this name conflict with your company name or trademark. In order to deal with this matter better, it’s necessary to send email to you and confirm whether this company is your distributor or business partner in China?

So we thought this was a sincere attempt to stop a company from registering our domain with a Chinese (CN) registration. We responded with the following note suggesting that we had no affiliation with the company and that we felt they should not register the .CN domain for the company.

Edward

We do not have any partners in China so this is not a valid request from the Hantong company. We thank you for your attention in this matter and hope you can resolve the questions with that company.

Chris..

Then today we received the following message.

Dear Chris,
Based on your company having no relationship with them, we have suggested they should choose another name to avoid this conflict but they insist on this name as CN domain names (.cn/.com.cn/.net.cn/.org.cn) and internet keyword on the internet. In our opinion, maybe they do the similar business as your company and register it to promote his company.
According to the domain name registration principle: Domain name and internet keyword which applied based on the international principle are opened to companies as well as individuals. Any companies or individuals have rights to register any domain name and internet keyword which are unregistered. Because your company haven’t registered this name as CN domains and internet keyword on the internet, anyone can obtain them by registration. However, in order to avoid this conflict, the trademark or original name owner has priority to make this registration in our audit period.
If your company is the original owner of this name and want to register these CN domain names (.cn/.com.cn/.net.cn/.org.cn) and internet keyword to prevent anybody from using them, please inform us. We can send you an application form with price list and help your company register them.

It appears to be a scam where the registrar is scouring the .com domains and sending this note out to hundreds if not thousands of them! I found the following link to a post about a similar request from the same person.

Just as a further push, I received a note from one Gareth Lee (gareth@live.cn) with the following content.

Dear Sirs,
We are Hantong company based in China. We will register the “shieldadvanced” as internet keyword and CN domain names .cn, .com.cn, .net.cn, .org.cn. We have handed in our application and are waiting for Mr. Edward Wang’s approval. We think this name is important for our products in Chinese market. Even though Mr. Edward Wang advises us to change another name, we will persist in this name.
Best regards
Gareth Lee

So if you are sent a note about a company trying to register your domain in China, with a follow-up offer to sell it to you to protect your rights, it's probably going to be a scam. I am not sure if anything can be done about this, or even whether it is legal to carry out this kind of fraudulent activity. We are going to ignore the request; if they do sell our domain to another company we can deal with that when it happens.