
April 28, 2011

Jon has recently been examining a couple of open-source tools for generating XML documents that we thought we’d mention here. So far he’s barely scratched the surface of what they can do, but he’s already impressed. The tools in question are powerExt and the XMLi Toolkit.

Some of you may remember that way back in 2007 we wrote an article describing an approach that used CGIDEV2 ("Using CGIDEV2 for Generating XML"). That approach has stood us in good stead for many years, but there were times (e.g., when there were large numbers of optional elements and attributes) when we felt a different approach might be useful. These tools offer alternative methods of building XML that deal with those situations. The raisons d'être behind the two tools are very different, though. The XMLi Toolkit's author, Larry Ducie, developed his tooling to simplify the process of generating XML within RPG programs. powerExt's XML generation capabilities, on the other hand, are just one part of a toolset designed for producing dynamic, modern, RPG-based Web applications. We briefly mentioned this Web tooling in our recent Extra article on RPG UI modernization, "Still Dreaming."

For basic XML creation, the approach taken by both tools is similar. API calls are used to add elements to the document, and these APIs ensure the correct formatting of the XML, including making appropriate substitutions for characters such as "<" and ">". Both build the XML document in memory and provide APIs that allow the resulting document to be written to the IFS, etc.
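The substitution involved is straightforward; a minimal sketch (in Python for brevity, with a function name of our own invention rather than either tool's API) might look like:

```python
# Illustrative sketch of the character substitution an XML-generation
# API must perform. The function name is hypothetical -- this is not
# a powerExt or XMLi API, just the idea behind what they do for you.

def escape_xml(text: str) -> str:
    """Replace the characters that would break XML markup."""
    # '&' must be handled first, or the other entities get double-escaped.
    for raw, entity in (("&", "&amp;"), ("<", "&lt;"), (">", "&gt;"),
                        ('"', "&quot;"), ("'", "&apos;")):
        text = text.replace(raw, entity)
    return text

print(escape_xml('Fish & Chips <5 per "serving">'))
# -> Fish &amp; Chips &lt;5 per &quot;serving&quot;&gt;
```

Because the tools' APIs do this for you on every element, you never have to remember which characters need escaping or in what order.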

The following (partial) code examples both generate the same very simple XML document and write it to an IFS file.
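In spirit, the pattern in both cases looks something like this sketch (Python rather than RPG, with hypothetical names standing in for the tools' actual APIs): build the document in memory through a series of calls, then write the finished text out to a stream file.

```python
# Illustrative Python stand-in for the API-call style both tools use:
# elements are added through calls, the document accumulates in memory,
# and a final call writes it out. All names here are our own, not
# powerExt or XMLi APIs.

class XmlBuilder:
    def __init__(self):
        self.parts = []

    def add_element(self, name: str, value: str) -> None:
        # Each call appends one complete element to the in-memory document.
        self.parts.append(f"<{name}>{value}</{name}>")

    def wrap(self, root: str) -> str:
        # Enclose everything added so far in a root element.
        return f"<{root}>" + "".join(self.parts) + f"</{root}>"

doc = XmlBuilder()
doc.add_element("name", "Jon")
doc.add_element("status", "Impressed")
xml = doc.wrap("review")
print(xml)  # <review><name>Jon</name><status>Impressed</status></review>

# Finally, write the finished document to a stream file
# (on IBM i this would be an IFS path):
with open("review.xml", "w") as f:
    f.write(xml)
```

Both tools follow this same build-in-memory, write-at-the-end pattern.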

As you can see, they’re very similar. But while both tools offer similar XML generation capabilities, XMLi goes one step further and offers a complete templating system with some very impressive capabilities. It’s also designed with a plug-in style architecture and is therefore extensible to your specific needs.

XMLi Templating

The templating system is unbelievably powerful and allows for complete separation of the document layout from the RPG code that creates it. You'll have to download Larry's code samples to get a real idea of just how powerful this capability is. We'll keep our example simple and just attempt to match the creation of the XML document generated by the earlier code samples. Here's the template used:

Those of you familiar with XSLT transforms will find this vaguely familiar. Note in particular the special values "parm[1]" and "parm[2]". These are supplied by the RPG code at run time. Here’s the RPG code:

The interesting thing here is that, as with our CGIDEV2 approach, the template can be changed without changing the RPG code. Only if you want to add additional variables (parameters) do you need to change the RPG code, and even then it’s simply a matter of adding additional xmli_setParm() function calls. The really interesting thing is that templates can, among other things, include SQL to retrieve the data, making for a truly dynamic environment. In fact, there’s even a command that allows templates to be executed without requiring any RPG code.
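The template/parameter split can be sketched roughly like this (a Python analogue, not XMLi's real template syntax or API): "parm[n]" placeholders in the template are filled in at run time, mirroring the xmli_setParm() calls, while the template itself can change independently of the program.

```python
# Rough Python analogue of the template/parameter separation XMLi
# provides. The "parm[1]"/"parm[2]" tokens mirror the special values
# mentioned above; everything else here is our own sketch, not XMLi's
# actual syntax or API.

TEMPLATE = '<order number="parm[1]"><customer>parm[2]</customer></order>'

def set_parms(template: str, parms: dict) -> str:
    # Substitute each run-time value for its parm[n] placeholder,
    # as the xmli_setParm() calls do in the RPG program.
    for n, value in parms.items():
        template = template.replace(f"parm[{n}]", value)
    return template

print(set_parms(TEMPLATE, {1: "A1234", 2: "Acme Corp"}))
# -> <order number="A1234"><customer>Acme Corp</customer></order>
```

The payoff is that reshaping the document means editing only the template; the program changes only when a new parameter is introduced.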

There's a lot to like with both of these tools, and we're looking forward to spending more time working with them. In both cases the documentation is a little "thin" right now, but it's growing by the day, and the example programs are very well documented and tell you most of what you need to know.

Thanks to Larry Ducie and Henrik Rützou for two excellent tools that can only help the IBM i community move forward.

Before we go ... just one last thing on the subject of community. IBM has added an IBM i-specific area to the developerWorks website. There are some good links available, and we're hopeful of seeing more good content in the future. Unlike the RPG Café, you don't need to sign in, and so far the performance is pretty snappy--a welcome change. Check it out here.

March 01, 2011

In our lives as teachers and consultants, we get a chance to see many samples of RPG code. Fortunately, much of the code we see is in what we consider to be relatively good condition, especially when you consider the age of many of the programs we’re working with. Perhaps this is because most of the clients we work with come to us because they’re aware of the importance of modernization. Then again, from time to time, we see some coding techniques that we’d hoped had been relegated to the distant past.

As we’ve said before, one of the strengths of our beloved IBM i is that even the most current releases are capable of running code that was written decades ago. That’s a powerful statement of investment protection and upward compatibility that’s unparalleled in the industry. However, that very feature also could be seen as a major problem for the platform because way too many shops seem to take the position that “if it’s not broke, don’t fix it.” Not a bad idea, but perhaps we should define “broke.”

So the code still manages to produce invoices or payroll checks. But how easily can you update it or add new features to it? What if a new (young) hire were given the task of enhancing it--would they be able to understand it easily? What if some of your users are mobile and need to access the application without a traditional 5250 emulation screen? Given those considerations, is your code broken or not? If the application can’t keep up with the demands of the business, we maintain that it’s broken.

Of course, modernization doesn’t happen overnight, and many shops are making the investment to modernize their code base or, at the very least, to adopt modern coding practices with any new code they write.

The title of this blog, however, comes from the saddest situation that we’ve recently seen. These are programs that were all written not in some distant past, but in the last two to seven years--and they still use techniques we thought were considered old-fashioned even before RPG IV came out in 1995! (Did you realize it was that long ago? RPG IV will be sweet 16 this November! Perhaps we should hold a party?)

These programs make frequent use of the dreaded CAB operation and use more left-hand conditional indicators than you can shake a stick at--often along with a COMP operation to set other indicators that are used to ... condition more code, of course. Some of the techniques used were so alien to us that we actually had to write test code to convince ourselves of what would happen when, for example, an END statement was conditioned by an indicator and that indicator was off. And let's not even mention the dreaded GOTO for fear of really starting a holy war.

But finding this code wasn’t the worst part. We were working with another consultant on the same set of code. His assessment was that it wasn’t really all that bad compared to code at other shops he’s worked in. That’s when we realized how sheltered our coding lives must have been! Either that, or he has been working in some really, really bad RPG shops!

Perhaps because both of us were so closely involved with the development of the new RPG IV compiler and its more advanced features, we’re more bothered by those programmers who don’t use them! Or maybe we’ve been lucky enough to be working with more modern coding styles and have wiped from our memory banks the old techniques.

Fortunately, as we noted earlier, most of our clients come to us because they recognize the value of modernization and so their code--especially the stuff written in the last decade or so--has far fewer of these techniques that we thought--or perhaps just hoped--had deservedly gone the way of the dodo.

How about you? What's the "best" worst example of recently written code you have tripped over? There’s a shirt from the RPG & DB2 Summit in it for the example we find most appealing--oh, sorry, that should be appalling.

July 13, 2010

Although we’ve had the details of Rational Open Access for RPG (OAR) for a while now, we hadn’t had the time to explore it in the depth that we’d like. That changed recently, and we’ve just wrapped up an article for the upcoming issue of IBM EXTRA that walks you through the creation of a simple OAR handler. It launches July 21. To subscribe to IBM EXTRA, go here.

All of the focus with OAR has been on the Web-GUI front, but as we've noted before, we think it has great potential in other areas. As a result, for our first effort we chose to write a handler to generate IFS files. Our example doesn't do anything that couldn't have been done by an RPG program that directly called the IFS APIs, or indeed one that generated a work file and then invoked CPYTOSTMF to create the IFS file. But there's an elegant simplicity in being able to write directly to the IFS by adding a keyword to the F-spec for the file and using your normal write operations.

We were pleasantly surprised to find how easy it was to code the handler once we’d figured out the basic mechanics. The fact that the data is already in human-readable form was a big plus as it avoids the necessity of decoding packed decimal and binary fields. For many, the biggest challenge will be the fact that almost everything (and we mean everything) involves the use of basing pointers. OAR passes you a single parameter, and the vast majority of the items in that structure are the basing pointers to the various fields and structures that you need to access. Luckily, you don't need to do any pointer manipulation, but you should be completely comfortable with the use of based structures--if you are, the rest is a breeze.

The only other thing that you should be familiar with is the use of nested data structures and qualified data names, but then you all know about those already, right? Of course, we could’ve made the handler more complex. One obvious addition would be to optionally output an initial record containing the field names as column headings. Another would be to allow the user program to specify things such as the field delimiters (e.g., tabs instead of commas, single quotes instead of double, etc.), end-of-record characters and the code page for the IFS file. Implementing these isn’t rocket science, but it does hint at the level of complexity that’d be required should one decide to write their own 5250 handler!
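The options described above can be sketched in a few lines (a Python illustration of the idea; the parameter names are our own, and nothing here is part of Open Access itself): the caller chooses the delimiter and quote character, and can ask for a heading record of field names.

```python
# Sketch of the kind of options such a handler could expose:
# caller-chosen field delimiter, quote character, and an optional
# heading record of field names. Illustrative Python only -- the
# parameter names are hypothetical, not part of Open Access.

def format_records(records, field_names, delimiter=",", quote='"',
                   headings=False):
    """Render a list of records (dicts) as delimited text."""
    lines = []
    if headings:
        # Optional initial record containing the field names.
        lines.append(delimiter.join(field_names))
    for rec in records:
        lines.append(delimiter.join(
            f"{quote}{rec[name]}{quote}" for name in field_names))
    return "\n".join(lines)

rows = [{"cust": "A1234", "name": "Acme Corp"},
        {"cust": "B5678", "name": "Globex"}]

# Tab-delimited, single-quoted, with a heading record:
print(format_records(rows, ["cust", "name"], delimiter="\t",
                     quote="'", headings=True))
```

In a real handler, of course, these choices would arrive from the user program rather than as function defaults, and the result would be written to the IFS rather than returned as a string.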

We’ll be working on other handlers soon and hope to see others publish their own efforts. Much can be done, but the code won't write itself!

If you have any suggestions for handlers that you think would be useful, please let us know via the comments. We'd rather spend time on routines that will be useful to others than just play around for the fun of it.

As it happens, Susan will be teaching OAR at the Ocean User Group conference on Friday of this week. If you live in the area and haven't already booked for the event, come and join us! If you're already registered we look forward to seeing you there.

June 15, 2010

For our EXTRA article this week, we wrote about the new DDS Designer--the one integrated with RSE in its latest incarnation as part of RDP or RD Power or ….

In that piece, we couldn't help drawing parallels between the new DDS Designer and the CODE Designer. We're extending that discussion here and inviting your participation.

Neither of us is a big fan of SDA, and we pretty much considered RLU hopeless. CODE Designer, while far from perfect, has been our DDS designer of choice for many years. When the integrated Screen Designer came out as a Technology Preview, we spent some time with it. But since it wasn't recommended for production work, we didn't teach it in any detail in our private RSE classes.

Now that the new designer is officially supported, we're beginning to teach it, and in doing so, running into the inevitable comparisons with what came before--for us and for many of our students who are/were CODE Designer users as well. Which do we like better? That's not an easy answer. There are some really nice new features in the new one, but at the same time, we miss some of CODE's features.

First of all, the new designer is space hungry; it seems to work best with a large monitor. Since we spend so much time working on a laptop screen, this gets frustrating. The new designer makes extensive use of an expanded Properties view, much like the properties notebook we used in CODE. We could make the properties view a fast view or detach it, which would feel more like opening the CODE properties dialog. The difference is that CODE had many of the most-used properties integrated at the top of the design screen (i.e., name, data type, length, row/column, color, display attributes, etc.), so we didn't need to use the properties dialog as often. With the new designer, the properties view is the most obvious place to change these things, so after dropping a named field onto the design screen, we want to use properties to change the field's name, data type, etc., and we need to use multiple tabs on the properties view to get to things like display attributes and color. This isn’t a big deal if you have a large monitor available, but it’s an irritant compared to the ease of adding new fields with CODE.

On the other hand, one option for changing those things that’s quite feasible with the new designer is to click on the source tab and directly edit the DDS source. The changes are immediately reflected on the properties view and the outline view and, best of all, you can quickly flip back over to the design view to do additional work. The ease and speed with which we can flip back and forth between graphical design mode and raw DDS-edit mode is miles ahead of CODE. Yes, you could do it in CODE, but it was slow and painful, so we rarely did it.

There are a few irritants in the new designer, such as the inability to key static text onto the screen directly. You must first select an item from the palette and then change its contents. This isn't horrible, but it seems unnecessarily cumbersome compared to what we did with CODE. Similarly, to lasso multiple items on the screen, you must first select the marquee tool from the palette. Why? This is particularly frustrating when you've lassoed items in order to move them, because you must then manually switch back to the selection tool from the palette before you can move them. Not show-stoppers, by any means, but irritating to those of us who used CODE.

On the other hand (again), with the new designer (in its latest incarnation, not in some of the earlier technology preview versions) you may use the graphical designer to design a single record format. You’re not forced to put it into a group (CODE term) or a screen (new designer term). The concept of a group/screen is great for those occasions when you need it, but much of the time, working with record formats is all that’s necessary. For some reason, you do seem to be required to create a screen if you want to use the preview facility, which is also where the indicator sets and sample values for screen and report fields can be used.

The use of the field table to drop database reference fields onto a screen is great. Yes, we could do a similar function in CODE, but this one seems a little simpler and makes good use of an existing RSE feature that shows off the integration of the tool set, as does the use of the outline view for navigation, which replaces the "DDS tree view" of the CODE designer.

Other niceties in the new designer include the sliders to adjust the screen-image size and the visibility of the grid lines on the design screen. One of our other favorite features we ran across by chance is the capability to peek at the source code for the selected item on a screen (including an entire record format) by hovering over the name of the item just above the design screen image. The source for the item pops up in a box over the design screen--a very cool feature.

Of course, "undo" is more useful than making and reverting to checkpoints in CODE. But we find we do miss the capability to distribute a group of fields evenly across a screen.

We could go on with the comparisons, but perhaps we've said enough for one blog post. All in all, we like the new designer, even more than the CODE designer. The very notion that it’s truly integrated with RSE makes a difference, and some of the new features are nice. We're hopeful the new one may, over time, re-learn a few of the tricks its ancestor had.

We're anticipating comments along the lines of "why are we even devoting time and energy to a DDS screen designer when the world has moved on to browsers and other non-green-screen user interfaces?" Our reaction is that some shops have moved away from green screens at a slower pace than others, so why shouldn't maintenance of those applications be as productive as it can be? Maybe they can use the time saved to work on modernization efforts!

If you’ve experienced both designers, help us continue the discussion via the comment section. If you haven't used the new designer yet, take a look at EXTRA when it comes out this week.

IBM Systems Magazine is a trademark of International Business Machines Corporation. The editorial content of IBM Systems Magazine is placed on this website by MSP TechMedia under license from International Business Machines Corporation.