
Visual LANSA requires a significant amount of screen real estate, so use the largest, widest monitor you can find. That way you can keep all five Visual LANSA window locations open at the same time, with no auto-hiding necessary, and still have room to show audit stamps.

When the time comes for your employer to replace your PC, tell the decision-maker how screen real estate is more important than CPU speed. Because it is.

LANSA recently introduced a new product called LongReach. Have you seen it? It is the first LANSA product I am aware of that comes with a native iOS client. They are giving away 1000 free licenses, so go grab yours. You will need an iOS device for testing, but I am willing to bet that you or someone in your department has an iPhone, an iPad, or both.

Pro tip: Here is the correct syntax for compiling LANSA processes and functions from code. Replace RENPGMR with a job queue from your box.

Note #1: Yes, the Allow Debug / Remove Program Observability parm is YESYES, one word. For some strange reason, the built-in function requires these two parameters to be sent as a single six-character parameter.

Note #2: The second half of this parm (Remove Program Observability) is incorrectly named and should really be “MAKE Program Observable”. YES = the program will be made observable. NO = the compiler will strip observability from the program. We want it to be observable.


* Compile a LANSA process and its functions from RDML code
define field(#@proc) reffld(#process)
define field(#@func) reffld(#function)
def_list name(#@funclist) fields(#@func)
* RENPGMR: replace with a job queue from your box (see note above)
* YESYES: the Allow Debug / Remove Program Observability parm (see Notes #1 and #2)
use builtin(compile_process) with_args(#@proc #@funclist #@proc RENPGMR QBATCH QDPOUTQ YES NO NO NO NO YESYES NO NO NO NO NO) to_get(#$RETCODE)

If you have triggers that test the value of the trigger return code before defaulting it, those triggers will crash when the I/O comes from an RDMLX function.

You Really Use Triggers?

Today, general wisdom suggests putting your business logic into a callable module so that it can be accessed from a 5250 app, a web app, and a Windows/VL app without modification. Those who created the architecture for our system back in the day believed the correct place for most business logic was in triggers. Because of that decision, triggers are now business critical in our organization.

The Switch That Does Nothing Does Something

Before considering the move to an RDMLX-enabled environment, we performed due diligence to see how well our system would work. Most things work so far. However, the decision to RDMLX-enable our environments, which some people have told us “is a no-brainer because it doesn’t change anything,” is not quite so straightforward in our situation. There ARE things that break.

But You’ve Gotta

LANSA mandates that every trigger default the trigger return code. Every time we’ve asked about this, the answer has been consistent: the examples and templates from LANSA clearly show the correct way to code the trigger return code.

Andersen Software Factory

Our architecture dates back to when the Andersen Software Factory from Andersen Consulting ruled the roost. Raise your hand if you remember the Andersen Software Factory! That PC-based code generator sat on top of LANSA. Yeah, good times. Fortunately, the Software Factory was destined to sleep with the fishes once the Y2K boundary was crossed. In preparation for that glorious event, I was asked to replace it with LANSA templates. The transition went smoothly, and a few of the team helped to fully decommission those PCs… by drilling holes into the hard drives. I wish I had pictures.

That history is important because the Software Factory had a specific way of coding everything. A proper structure is a good thing and the Software Factory had some good stuff. Take conversation control, for example. There were a few places, though, where the structure was broken. Like not defaulting the value of trigger return code in triggers.

RDMLX Blows (Up)

When we began testing our ERP system in an RDMLX environment, we converted some of our RDML functions into RDMLX. We didn’t get one transaction completed because our triggers crashed. It took a couple of hours to figure out the root cause.

The IOM and the OAM are distinct objects that do not communicate with each other and that behave differently. On the iSeries, an IOM is an RPG program which is executed any time an RDML function performs I/O over a file. On the iSeries, an OAM is a C program which is executed any time an RDMLX function performs I/O over a file. (For you LANSA hackers looking for holes in the logic, I am intentionally ignoring *DBOPTIMISE in this example.)

Now let’s test. We create a trigger function that does not default trigger return code but that does test the value of trigger return code. The trigger is called for every insert, update, and delete. We have an RDML function that maintains the file by updating one particular record that already exists. We have an RDMLX function that contains code identical to the RDML function. Identical.
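To make the failure concrete, here is a minimal sketch of the problematic trigger pattern. The RDML is illustrative rather than our production code; #TRIG_RETC is the trigger return code field from the standard trigger template.

```
* Trigger function that TESTS the return code without defaulting it first.
* Under the IOM, #TRIG_RETC arrives as 'OK' and this test behaves normally.
* Under the OAM, #TRIG_RETC arrives as hex X'00' and the function aborts
* on this very test, because X'00' is not valid for an alphanumeric field.
if cond('#TRIG_RETC *EQ OK')
* ... trigger business logic ...
endif
```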

When you maintain the file from the RDML function, the file’s IOM is used. The IOM defaults the trigger return code to ‘OK’ and passes it to the trigger. When the trigger tests the value of the trigger return code, the test succeeds since the value is ‘OK’.

When you maintain the file from the RDMLX function, the file’s OAM is used. The OAM defaults the trigger return code to hex X’00’ and passes it to the trigger. When the trigger tests the value of the trigger return code, the function aborts because hex X’00’ is not an alphanumeric value and #TRIG_RETC is an alphanumeric field.

We’ve run this test dozens of times, and although the IOM and the OAM both default the value of the trigger return code, the default value in each is different. In the case of the OAM, the default appears to be an invalid value for the field.

To recap, our ERP system seemed to work fine in an RDMLX environment until we introduced RDMLX functions. As soon as they performed I/O, triggers were executed which immediately aborted. Because we have everything under commitment control, this caused the entire transaction to roll back. WAMs would have had the same result.

Resolution

It appears that our only option is to default the trigger return code to OK in every trigger function and to update the templates to do the same for all new triggers. Unfortunately for us, this task is not only huge, it is risky. We are automating the update of trigger functions in order to reduce that risk.
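The fix itself is small. A minimal sketch of the guard going into every trigger function (and into the templates), placed before any test of the return code:

```
* First executable statement of the trigger: default the return code so
* later tests behave the same whether the I/O came through the IOM
* (which passes 'OK') or the OAM (which passes hex X'00')
change field(#TRIG_RETC) to(OK)
```

With the default in place, the rest of the trigger can test #TRIG_RETC safely regardless of whether the calling function is RDML or RDMLX.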

When asked about my profession, I often tell people that I am a software architect and that I use the LANSA tool set. Then I point them to my LinkedIn profile. The problem is that I’ve told them my professional title and the tools I use but I did not answer their question. I didn’t tell them what I DO.

Chew on this for a few minutes. When you are at your best, after blinking back into reality from hours of being in the zone, what did you accomplish?

Here are a few examples from my career.

A client wants to use the new LANSA tools and language but it is impossible using their old architecture. They asked me to replace the tracks while the train is in motion. Updating architecture is one area of passion for me! I succeed at high-profile, risky projects that update the development architecture. This gives my clients options they didn’t have before.

There is a code search utility over the proprietary LANSA data store that has produced complete and thorough results for more than ten years. It is solid and, most importantly, it is trusted. I created that. Building utilities that remove frustration from the lives of other developers makes me happy.

Most developers have learned nuggets of wisdom over the years and as a seasoned LANSA developer, I have my share. I enjoy sitting with developers and passing those nuggets around. It is like playing professional Mancala. I mentor.

Rather than picking up whatever tool was lying around, the RPG team members and Rosa found notable advantages in the tool already in-house. For one, a great deal of data management is done on the iSeries box. The team is very familiar with the data structure, and LANSA made use of all the existing data structures. LANSA also had an RPG-friendly way of calling RPG programs, and it allowed developers to take advantage of existing back-end practices that were working just fine, so there was no requirement to write code from scratch.

A great testimonial of an IT department bringing their iSeries RPG programming staff, who were smart and knew the business well, into internet development using the LANSA tool set.

Beyond Modernization is a trilogy of eBooks by Paul Conte. If you have signed up for LANSA email notifications or subscribe to the blog, you have no doubt seen the info about this set of eBooks. This is my take on them. If you like, you can head to the Beyond Modernization web site to download all three.

Design

I was impressed before I read a word because the books are beautifully designed. The backgrounds, the bullet points and graphics, the important points that are set apart all look great.

My only design gripe, and this applies to technology articles in general, is with stock photos of people that look like posed mannequins. They only detract from the message.

Book 1: Prepare For Your Journey

I thoroughly enjoyed reading the first book. It was written directly to the LANSA architect in me.

Book one talks about IT game changers such as the explosion of mobile devices, ubiquitous internet access, and cloud storage. By becoming a member of this world, our iSeries shops can not only stay relevant but can excel.

“When you see what a company like Google has accomplished by pushing their use of these capabilities to the limit, you realize other companies — possibly some of your competitors — won’t be far behind. The potential for leveraging technology is enormous.”

My favorite part of the book was the discussion of what should drive IT business decisions. Make your customer happy! Paul gives some great suggestions, such as providing sales and service to our customers anytime, anywhere, on any device. Make sure our business is open 24×7. Applications ought to be localized and ought to handle multiple currencies.

He goes on to shore up his point that all solutions should be business-driven and not based on a particular technology. Don’t ignore the technology but make sure that we are implementing a business solution. I couldn’t agree more as long as the business is focused on the customer. Our customers require more of us. To keep them as customers, it helps tremendously to become a social enterprise.

In the section on Enterprise Application Architecture, he talks through how to implement an architecture that integrates the systems assets that we already have into this new world. This is an involved and detailed section that I really enjoyed reading.

The first book concludes with some kick-in-the-pants encouragement:

“But as I’ve tried to lay out in this book, there are sound ways to approach the problem. The most pivotal step is to consistently follow a business-driven strategy. This involves understanding what drives the need to transform your IBM i applications, both from your own enterprise’s business requirements and the dramatically changed technological landscape upon which your enterprise — and its competitors and partners — conduct business.”

Paul makes a lot of sense and the entire first book seems to have been written directly to me.

Book 2: Be a Savvy Traveler

Book two walks the IT manager and the project manager through the process of ensuring the modernization project is a success. He covers it from top to bottom and makes one key point about putting the right developers onto the project.

“A developer who understands your business, knows your current RPG programs inside and out, works well with other business units and delivers quality code at a reasonable pace is just who you need on the bus. And the right seat is one in the front row, helping establish your application architecture.”

There is significant risk in moving to a new architecture and the final portion of the book talks about mitigating that risk. Two pieces of advice mentioned are to stay focused on your business drivers and to change incrementally and iteratively. That seems like sage advice to me.

Book 3: Embark with Confidence

Book three stresses the importance of using an application generator when modernizing. It is written to the IT department that does not use one. It came THIS close to feeling like a sales pitch, except no product was mentioned by name. For the RPG shop, it makes the case that an application generator streamlines the process of getting where you want to go. Paul details four scenarios to keep in mind when preparing to modernize:

The immediate need for exploiting new technology such as mobile devices.

The immediate need for workflow and integration.

The intermediate requirement for multi-platform support.

Long-term agility, productivity and quality improvements.

He then summarizes some best practices for transforming applications. This is a good list and I recommend reading through it.

Conclusion

If you are an IT manager, this entire series is a worthwhile read. It will make you think. If you are a developer, I suggest reading through the first and third books with an eye on pleasing the end customer, on keeping your skills current, and on proactively pursuing modern technologies.

After reading this series, I find myself wanting to read more from Paul Conte.

Straight out of college, I was the new kid on the block and green: a newbie who didn’t yet realize the part LANSA would play in my career. I remember being in the cube of a seasoned developer as he was editing a function. I remember watching when, instead of using the standard insert line command, he used “IP” to insert PRIOR to the current line. To me it was a brilliant Easter egg, and I immediately incorporated it into my bag of tricks.

Then it happened. I couldn’t stop myself. I improved on it.

I created a single keystroke macro that inserted prior. Then I created another for the standard insert. Without realizing what had just happened, I became obsessed with removing friction from my workflow. And it all started inside LANSA’s green screen editor.

Today the tradition continues. Any task that happens more than twice gets automated. For any word with specific capitalization or that I have mistyped more than twice, I create a word replacement rule. It is a living utility that I constantly update and improve. I call it Underflow since its power is hidden from view, and I have made it freely available to other developers.

I’ve been in the LANSA world for 17 years and so much of that time was spent in that silly editor.

Who can forget having to use the unformatted prompt (“U”) on some of those massive DEF_LINE commands? Or moving and deleting blocks of code with double Ms and double Ds? Or jumping to SEU and screwing up those long comment lines? Or freeform typing commands without the parameter names and then using <prompt> and <enter> on each one to make them look pretty?

Did you EVER change “Roll” to something other than 13? I did once.

So, my friend, it is with bittersweet anticipation that I work toward eliminating you as we move toward RDMLX-enabling our environments. Once we do, you become a read-only viewer for RDML functions.

I wonder what I will remember. I will remember the now-missing F21 key that was so ingrained in my muscle memory that I could press it without thinking. I will remember that you never locked up or crashed like your upstart younger brother VL; not one crash in all those years. I will remember the countless hours of poring over code, getting frustrated AGAIN that I had to save (triple enter!) before indenting and coloring. I mean, really, why can’t I color the code while in edit mode? I will definitely remember using you to debug one function at a time. Ah, how quaint. I remember the excitement I felt when I finally understood how to use your report designer.

I have memorialized you. RDMLX code is supposed to be viewed only within Visual LANSA but we needed a way to view it on the green screen in production. So I built a viewer over Frodo (DC@FRD) that looks just like you. It includes indenting and coloring. They are turned on by default because it makes me feel kinda like a rebel.

LANSA has memorialized you. Whippersnappers these days don’t know what they are missing just using the compile button in VL. Using Full Function Check shows a window with access to the old screen and report designers. LANSA cloned them from your keyboard-driven interface and grafted them into a mouse-driven interface. They are the Franken-editor.

Over time we will convert more of our vintage functions to RDMLX to get more use out of the language. We will spend the majority of our time in Visual LANSA and you will fade into a fond but distant memory.

Today I am the seasoned developer mentoring the new kids on the block and showing them Easter eggs. They are straight out of college and green; newbies snickering at the boring but stable green on black and bragging how they’ll never have to use it because they have VL. I choose to sit quietly and smile, knowing they don’t have the experience to understand the part you played.

Thanks to the new LANSA installer framework, the LANSA Composer install process for both the server and the client is straightforward. You answer a couple of questions, press Enter, and wait. If that were the end, though, this article would be very boring and much shorter.

Where to Install Composer

There is one crucial question to answer before enter is pressed. Where will we install Composer on the server? Or more to the point, where SHOULD we install Composer on the server?

Keep in mind as we continue that a LANSA environment is the entire LANSA install, not a LANSA partition. We usually refer to the environment by the name of its program library: typically DC@PGMLIB or DCXPGMLIB.

There are two options. Install it into an existing LANSA environment or install it into a new LANSA environment. We wondered if there was a material difference between the two or if there was an official best practice so we asked around within the LANSA organization. Unfortunately, we received different answers from a few different people. Since we have LANSA Services in-house on a project that depends on LANSA Integrator, we asked them for a specific recommendation as a project requirement.

Best Practice Recommendations

Here are the recommendations that come directly from LANSA.

Install Composer into its own LANSA instance.

If your system calls LANSA Integrator directly, your system should have a dedicated Integrator environment in addition to the Composer Integrator environment.

LANSA says the reason for both of these recommendations is the ability to upgrade your LANSA environment and your LANSA Composer environment at different times.

Let’s dissect these recommendations in order to better understand the implications.

Dual Integrator Installs

In order to understand this recommendation, just watch the dominoes fall.

LANSA issues Integrator EPCs separately from all other EPCs, including LANSA for iSeries.

On occasion, Integrator EPCs contain updated JSM BIFs.

When the JSM BIFs are updated, both the Java code in the Integrator instance and the BIF code in all LANSA environments that use that Integrator instance must be upgraded at the same time.

If the LANSA system were to use Composer’s Integrator instance then the EPC would have to be applied to Integrator, to the Composer environment, and to the primary LANSA environment all at the same time.

LANSA does not want to require their customers to upgrade their primary LANSA environment and their LANSA Composer environment at the same time.

So they recommend two Integrator installs. One is used by the primary LANSA environment so LANSA + LANSA’s Integrator are upgraded together. One is used by Composer so Composer + Composer’s Integrator are upgraded together.

Dual LANSA Installs

I have changed my stance on this and agree with LANSA that it is a good idea to decouple the primary LANSA environment and LANSA Composer in order to allow their upgrades to happen at different times. Since Composer and non-Composer EPCs ship separately, upgrading them separately is both attractive and easier.

It also mitigates the risk that comes from the off chance that a LANSA for iSeries EPC was not thoroughly tested in a LANSA Composer environment or that a LANSA Composer EPC was not thoroughly tested in a LANSA for iSeries environment.

Do you see that elephant in the room? His name is ‘How Do I Call Composer Processing Sequences From My Existing Lansa System In This Scenario’. It’s a good thing he’s an elephant or his name badge would never fit.

We know that calling between LANSA partitions, let alone environments, is frowned upon. The possibility exists that the LANSA environments are at different EPC levels. LANSA took these criteria and more into account when they created a new product designed specifically for this task. Many of you have probably used it already. It is the LANSA Composer Request Server.

LANSA Composer Request Server

Thanks to the LANSA Composer Request Server, there is a way to execute calls between the Composer environment and your primary LANSA environment. It appears to be a good solution but there are a couple of significant caveats.

Caveat #1

In order to call from Composer to a different LANSA environment, the destination environment must be multilingual. There is a way to work around the monolingual problem, but it leads to a maintenance nightmare. So if the destination environment is not multilingual, the call through the Request Server will fail. How did we figure this out? Yep, our primary LANSA environment is monolingual. The infamous National Language (NAT) is what our system speaks.

Caveat #2

This is the big one. The only method for calling Composer from a different LANSA environment is to use the COMPOSER_RUN BIF. This BIF is not yet available and will be released as part of V12SP1. Once the EPC is available, you will have to upgrade your primary LANSA environment to V12SP1; there will not be an EPC for V11SP5. Just to make sure we are all on the same page: it is not possible to follow LANSA’s recommendation right now, and your environment must be at V12 in order to apply the EPC. Ours is not.

Maybe you are in our situation of being caught by both of these caveats. If so, ‘Best’ practices have become ‘You Have GOT To Be Kidding Me!’ practices. Unfortunately for now there is little to be done. It is what it is. LANSA as an organization is just now coming to a consensus on this recommendation.

Our Decision

Because we make good use of LANSA Support, our decision is to follow LANSA recommendations even when it is difficult and frustrating. For the foreseeable future that means we can neither call Composer from our primary system nor call functions in our primary system from Composer.

Get in Touch

If you are dealing with LANSA issues and would like to discuss them, get in touch.