October 2014

IBM i

09/03/2014

Well, it’s been a while since my last blog. I’ve enjoyed some vacation, relaxing with friends and family while I recharge my batteries. I’ve also started the annual Fall Plan and traveled to talk with a few customers, so the blog had to take a back seat for a while. But I’m back with another post in the series on the process of modernizing applications on IBM i.

Bellamy Software, a division of Sylogist Ltd., is a software solutions provider based out of Alberta, with solutions that have “streamlined business processes for municipalities, school districts and the private sector for over 30 years,” to quote their website.

My source this time is Derek Lutz, Director of Business Development - Public Sector, who was gracious enough to answer my questions.

Q: How are you bringing new customers to IBM i and Power Systems?

A: By hosting our application in the cloud and offering a SaaS solution. This brings down the cost for small and medium-sized customers by removing the cost of architecture acquisition, maintenance and support. We have found that over 90 percent of our new customers are choosing our cloud solution. Our applications have always been deep, powerful, robust, comprehensive and integrated. The application functionality has fundamentally met the needs of our customers, and when combined with cloud distribution and a modern, intuitive interface, we find that new customers are very excited.

Q: Can you share some examples of customers doing new things with your solutions on IBM i?

A: We now set up most customers to receive their reports via email from the SaaS server, reducing the time we need to spend training users to work with output queues. We find the majority of our customers prefer this method, as they can decide whether to physically print a report or simply save the PDF.

Q: Is modernization necessarily a long process? How much can be done in a short time?

A: We worked with our vendor to produce a working prototype in about three weeks. Of course delivering a production system took a bit longer but we have a development staff that is motivated and anxious to learn new technology and techniques. We use Newlook from looksoftware for our graphical needs. Newlook also supports our workflow and integration enhancements.

Q: Your product solves business problems for a specific industry or customer set. How does the modernization work you’ve done help satisfy those customers?

A: It’s easier to train new customers as many of them are already familiar with Web applications and not so much with text-based screens. Our cloud solution is fundamentally leveraging the application functionality that has been serving our customers well for many years. Modernization has allowed us to deliver this functionality in a manner that exceeds the very high expectations of end users today.

A: We leveraged IBM i servers because of their power to support a large volume of customers and end users. The scalability, reliability and security of IBM i makes it the perfect platform for cloud-based applications. The servers themselves are hosted in a third-party facility that ensures maximum performance, bandwidth and 24/7 reliability.

Q: If you could give one message to the IBM i community about the value of modernization, what would that be?

A: Modernization does not have to be daunting and IBM i can be a proud member of your modern IT infrastructure.

--------------

Wow. “The scalability, reliability and security of IBM i makes it the perfect platform for cloud-based applications.” This is exactly what many of our ISVs are finding. While many people think of “modernization” as being purely about converting “green screen” to something else, ISVs like Bellamy really “get it.” They are taking on a world of commodity servers running less-proven solutions by extending the value of their investment into the cloud. And they are succeeding. This is one of the messages I take with me all around the world as I explain the evolution of the IBM i platform and its directions.

I want to thank Derek as well as Brendan Kay from looksoftware, who helped me connect to Derek. I am really enjoying the specific stories I hear as I do this series, and I hope you find value in them as well.

04/28/2014

I have the privilege of highlighting the themes for the IBM i 7.2 release, but you are going to want to read about more than highlights, so I am also going to point you to other sources of great information.

In fact, let me start there. In a blog I wrote in February, I talked about the new IBM Knowledge Center. That repository for all IBM products is now available and has been receiving excellent reviews from users. Well, as you might expect, the 7.2 release documentation has a home in Knowledge Center. The URL is http://www.ibm.com/support/knowledgecenter/ssw_ibm_i_72/

Now let’s get to the 7.2 themes and some of the technology behind them. I’ll open with the chart we’re using to introduce the release.

Our themes are grouped into two major focus areas. First, we have a set of themes that relate to delivering a great platform for today’s solutions. Mobile devices, cloud delivery models, advanced middleware – IBM i 7.2 delivers function that enables all of these. And, in conjunction with the rest of Power Systems, we announce support for the first POWER8-based systems. You’ll be able to find a lot of information about the new Power Systems, and we will have more information about how IBM i takes advantage of the new architecture in the future, but that’s not the focus of our IBM i bloggers today.

Our blogs for announcement day focus on Integrating Advanced Technology, the second focus area on the chart.

One of the concepts we’re focused on, and we’ve invested heavily in with recent releases, including 7.2, is enabling data-centric design. Mike Cain talks about that in his blog (http://db2fori.blogspot.com/) and in particular how the new DB2 Row & Column Access Control security functions fit into that approach. Giving DB2 the responsibility of enforcing security, which is part of data-centric design, allows you to remove complexity from your applications and administration, while helping to ensure that you don’t miss anything as you enforce security policy for your organization.

IBM i 7.2 also has a theme of managing your system more easily. As you might expect, these topics fall into the areas covered by Dawn May in her blog (http://ibmsystemsmag.blogs.com/i_can/).

Additionally, Tim Rowe will describe how that management is implemented in Navigator, but he’s also talking about the tech preview of IBM i Mobile Access – a management tool you can run on your smartphone and other mobile devices. Tim’s blog starts at its new home for this announcement: blogs.systemideveloper.com

As I said above, in a single week we can’t cover everything in a major release. If your favorite topic is not covered by one of us in IBM, I encourage you to look at blogs and articles written by others who have dug into the details, or go to Knowledge Center or developerWorks and look around. If you have a chance, you can also join us at the COMMON Annual Conference in Orlando in just a few days, because several of us will be at the conference teaching people more about 7.2.

But if you can’t do any of those things, well, just wait a few weeks. We are certain to have more blogs about this major release for the next several months.

04/08/2014

Today, April 8, is announcement day for IBM i 7.1 Technology Refresh (TR) 8. That’s right, we are now the proud parents of eight semi-annual collections of new functions which have been added to the most powerful release we’ve ever shipped. When we shipped 7.1 four years ago, launching our new strategy of focusing on mid-release delivery of new function, we certainly expected it would provide a useful, non-disruptive means for our clients to adopt new technology. It has done that. What’s been particularly satisfying is to see the rate at which TRs are adopted.

When the first TRs came out, of course, fewer people had moved to the IBM i 7.1 release, so we knew those TRs would be adopted by fewer people than later ones. This was expected. What we didn’t know how to predict, though, was how fast people would adopt them. For the first couple of TRs, the adoption rate was steady but low, well under 1,000 per month. For the more recent TRs, thousands of people downloaded them within the first couple of days after GA, and the rate climbs steeply from there.

The high rate of download demonstrates a few things:

There are a great many clients on 7.1, and that number is still growing steadily. We know this from other sources, of course, but TR adoption data is actually the fastest and easiest way for me to see the rate. We still have to extrapolate a bit—not every 7.1 customer adopts TRs, and among those who do, they don’t all adopt every one of them.

The community, as a whole, trusts the quality of the TRs. If they didn’t, I would expect to see a slow ramp-up to adoption of each new TR. Instead, the rate of adoption is increasing.

The word about TRs has reached a “critical mass” of customers and business partners. When we first introduced TRs, we realized we were going to need to spread the word about them. Our customer set had been conditioned for decades to think of PTFs as almost exclusively a “defect fix” mechanism. For this reason, many of them had rules about PTF application that slowed adoption of TRs, which are delivered as a PTF Group together with a Cumulative PTF package. But now it is clear that most clients recognize the difference between PTFs that are “just fixes” and the TR deliveries.

So, what’s in the latest TR8 announcement? Several things, actually. There are new native attachment options (“native” meaning they do not require VIOS) related to 16Gb Fibre Channel and some SAS drawers. (That’s as much “hardware” as you’re ever likely to get from this blog author!) The WebSphere Liberty profile is now the base for the IBM i Integrated Application Server. And of course, DB2 continues to satisfy requests from users and ISVs, particularly in the areas of performance analysis and database engineering. One quick note, though – the actual GA for TR8 content is June 6, and the GA date for the other functions announced with the TR (such as the DB2 functions) is listed in the information for those functions. Check out the details on developerWorks, and in the several blogs and articles our IBM i team writes or helps produce.

The success of our Technology Refresh strategy has been quite satisfying, and the strategy will continue. But, as I have said before, we definitely still need major releases. So watch this space as April continues. There just might be more news before the month is over.

02/24/2014

Are you a database (DB) professional? If so, you can probably skip the blog this time.

Are you an IT professional who doesn’t get into the details of DBs? Then this blog is probably for you.

Are you someone other than an IT professional? Then I think you might have the wrong blog. Or you’re a member of my family. In either of those cases, thanks for the thought, but you should probably go read something else.

OK, I think I’ve reduced the audience now to people who might care about today’s topic. See, if you are a non-DB IT professional, you know that DBs are important, and you probably wish you knew a little more about one of the hot DB topics being discussed in IT these days.

That’s kind of where I was a while back. I freely admit to most anyone that, while I have worked on many (many!) parts of the IBM i operating system, I have never been a developer in the DB2 area. So I regularly need to ask some of the smart people on the DB2 team to give me some semi-advanced lessons on databases. In particular, I recently needed to hear about why so much of the industry is talking about “columnar” databases. So, I went to chat with Mark Anderson. Mark, as many of you will know, is the Distinguished Engineer who is the Chief Architect of DB2 on i. Today, I thought I’d share some of what I learned.

Many database conversations center on performance, and that’s true with our topic today. People are talking about the performance of the newer column-oriented DBs, in comparison to the traditional row-oriented DBs.

As their names imply, there is a difference between how the data is stored in each of these DB architectures. In row-oriented DBs, each piece of data in a row is put on disk very close to each other piece in that row. Typically, you can think of it as if a row is a contiguous set of bytes all sitting on disk. What’s stored in a row? Well, if you remember from DB 101 (you did all take the intro DB course, right?), a row typically has all the information about some entity. For purposes of this discussion, let’s say a row contains information about a customer. Then all of the attributes of a single customer are stored together on disk. Each of those attributes is a column in the virtual table (or array) we’re storing. I think this will go better with an example.
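Here is a tiny stand-in for such a customer table, sketched in Python; the names, ZIP codes and “2013 Total Order” values are invented for illustration:

```python
# A toy "customer" table. Each dict is one row: all the attributes
# of a single customer kept together, just as a row-oriented DB
# would keep them together on disk. (Data invented for illustration.)
customers = [
    {"name": "Alice",   "zip": "55901", "total_2013": 1200.00},
    {"name": "Bob",     "zip": "55902", "total_2013":  450.50},
    {"name": "Charlie", "zip": "55901", "total_2013":  980.25},
]

# Everything about Alice lives in one place (one "row").
alice = customers[0]
print(alice["name"], alice["zip"], alice["total_2013"])
```

Each key in a row ("name", "zip", "total_2013") is a column of the virtual table.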

On disk, you have a large number of pages storing all this data. Let’s say that each row in the DB is large enough to take up one page of storage. In this case, that’s one page for each customer. OK, so given the example above, in a row-based DB, all of Alice’s information is stored in one page. If your application deals with getting information about Alice, or updating information about Alice, then all you need to have in memory at one time is that single page.

Though I haven’t said it yet, you can probably imagine that in a column-based DB, all the data in a column is stored together. Again, let’s say in our example, a column takes a page. So, for example, all of the ZIP codes would be on one page. And all the values in “2013 Total Order” would be on another page.

(And hey, all you DB experts are supposed to have stopped reading, so I don’t want any smart comments about the design of the specific customer table above. I’m not a DB architect! This is a teaching example.)

OK, so now we can get to the point.

All databases (all computing, in fact) work fastest when the data they are working with resides in memory when the work is being done. Of course, data doesn’t normally sit in memory. It’s stored somewhere, and brought into memory when it’s needed by the DB. So, if your database is row-oriented, it’s going to have the best performance if you use it in a way that lets you operate on a row at a time. In our example, that brings one page into memory. (This is just a simple conceptual example, remember. In reality, operating systems bring more than one page at a time into memory. More about that later.)

On the other hand, if your DB is row-based, but you need to get all of the data stored in one column in order to do some operation, then you will have to access each of the rows, and that means spending time bringing each row into memory just to grab a small piece of data. If you had a column-oriented DB, you could get all of that data more quickly, because it’s all stored together. From the example, all the “2013 Total Order” data is in one column.

But – and this is key – if your DB is column-oriented and you wanted all the data about Alice, you’re going to have to bring in a lot of columns (pages) to get all of that data.

So there is the crux of why there is so much discussion about columnar DBs these days.

Oh, you still don’t quite get it? Well then, let’s go a bit further.

You see, until recently, almost all major databases were row-based. And, for most traditional business processing, this makes perfect sense. Most DB workloads have been what DB experts call OLTP workloads – On-Line Transaction Processing. OLTP workloads are the backbone of most existing business use of DBs. And these applications typically perform best with row-based DBs, because those workloads tend to need many attributes (values stored in many columns) about a single entity. They work with rows. Row-based operations bring in the minimum number of pages from disk when using a row-based DB.

But there are other workloads that perform better if the data in columns can be handled efficiently. Online Analytical Processing (OLAP) workloads frequently want to gather data from columns. For example, if you wanted to know the total of all the values in the column labeled “2013 Total Order” and you had a column-based DB, you would get that by bringing one column into memory and totaling the values, whereas in a simplistic row-based DB, you would need to get data from each row.
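The access-pattern difference can be made concrete with a toy model in Python, where we assume (as in the lesson above) that each row of a row store, and each column of a column store, occupies one page; the data is invented for illustration:

```python
# Toy model: count "page fetches" needed to total the 2013 orders.
# Assumption for illustration: one row = one page in the row store,
# one column = one page in the column store.
rows = [
    ("Alice",   "55901", 1200.00),
    ("Bob",     "55902",  450.50),
    ("Charlie", "55901",  980.25),
]

# Row-oriented layout: the totals are scattered, one per row-page,
# so summing the column touches every row's page.
row_pages_fetched = 0
total = 0.0
for row in rows:
    row_pages_fetched += 1   # one page fetch per row
    total += row[2]

# Column-oriented layout: all the totals sit together on one page.
columns = {
    "name":       [r[0] for r in rows],
    "zip":        [r[1] for r in rows],
    "total_2013": [r[2] for r in rows],
}
col_pages_fetched = 1        # one page fetch for the whole column
col_total = sum(columns["total_2013"])

print(row_pages_fetched, col_pages_fetched, total == col_total)
# The row store touched as many pages as there are customers;
# the column store touched one.
```

Flip the query around ("give me everything about Alice") and the costs reverse: the row store fetches one page, while the column store must fetch every column's page.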

Of course, it’s not as simple as all that. Row-based DBs, such as DB2 for i, have added methods to make tasks such as the simple “sum a column” example (or much more complex OLAP analytics) perform well. For example, DB2 for i has a couple of constructs called Encoded Vector Indexes (EVIs) and Materialized Query Tables (MQTs) that can be used to great effect. And DB2 for i has ways for customers to block reads (read many rows at a time) instead of reading rows one at a time. In addition to that user-directed method for improving performance, the storage management component of IBM i is smart. Very smart! It has to be; it’s implementing single-level storage. Because it is so smart, it recognizes that a user is reading rows sequentially and brings data into memory even before it’s requested. This matters because many OLTP workloads require a sequential “walk” through the rows, so bringing in blocks of rows gets them into memory ahead of when they are needed.
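The payoff from blocked reads can be sketched with a little arithmetic; the row count and block size below are arbitrary numbers chosen for illustration, not DB2 for i internals:

```python
# Toy sketch: how many I/O requests does a sequential walk need if
# rows are fetched one at a time versus in blocks of N?
def count_io(n_rows, block_size):
    """I/O requests needed to walk n_rows sequentially,
    fetching block_size rows per request (ceiling division)."""
    return -(-n_rows // block_size)

one_at_a_time = count_io(10_000, 1)    # a request per row
blocked       = count_io(10_000, 256)  # a request per 256-row block
print(one_at_a_time, blocked)          # prints 10000 40
```

Two orders of magnitude fewer requests for the same walk, which is why read-ahead and blocking keep row stores competitive on sequential work.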

On the other hand, you can’t just add an index to a column-oriented table to get good OLTP performance. As Mark says “It’s a bit like putting Humpty Dumpty together again. The pieces of the row are scattered on many pages of storage. Even if the entire database is in memory, a lot more CPU will be expended pulling all the columns of a row together.”

So, to wrap up this little lesson, the key to deciding which kind of DB to use is to ask yourself what kind of workload is most critical for your DB to support. Though you might want to do both kinds of operations, if your core business needs OLTP, then a row-based DB, with decades of performance optimization, might be the best choice. If your company doesn’t really need fast OLTP, but needs OLAP to be fast, then a columnar DB might be best.

And if you need both, then you will want to look at the mix of techniques available on each. You can decide to go with one, or you might even decide that a combination of the two architectures will suit you best.

Whatever you decide, just make sure that the solution is stable, because businesses really do run on their data.

And that’s it, readers. Database 102. Now, when you read those articles about columnar DBs, you’ll understand why people are talking about them.

01/27/2014

Happy 2014, IBM i community. I am having trouble believing the month of January is almost over. Things have been very busy around here already.

We are not ready to announce IBM i 7.2 yet, but the pre-announce activity has been rolling along. Last week I gave a preview of the release to some customers and partners in our “Early Ship Program,” including customers from India. Then I talked to two different large customers about IBM i Trends & Directions, and they received a short preview as well. (Yes, non-disclosure agreements are in place, in case any IBM executives are reading this. No worries there!)

In my recent customer visits, I have been reminded once again what a “double-edged sword” we have created as we’ve allowed customers to remain on older technology without forcing them to modernize.

I have seen customers that are growing rapidly who are inhibited by database designs that are older than the AS/400. Despite the relational database capabilities that have been a part of this operating system for more than 25 years, they are still running their businesses using older flat-file or hierarchical designs. Now that they are growing, and trying to interoperate with more modern technologies such as Web services, they are hitting limitations that will hamper their ability to handle that growth.

Most of the customers I have seen recently who are encountering this issue are undertaking projects to restructure their data and applications to allow for their present and future needs. Some people like the word “modernization” and some don’t. Whatever you call it, this is something people in IT really need to see as part of the lifecycle of business software. You can ignore it for a while – sometimes for a long while, if you’re using an operating system that continues to support technologies as long as IBM i does – but eventually you have to deal with the changing requirements of your business processes.

That’s why “Modernization” is going to be a big topic for this year. And it’s why we’re about to publish the modernization Redbooks publication. Jon and Susan mentioned it in their blog about Predictions for 2014. Tim talked about it in his iModernization blog last fall. Soon, we’ll be spreading the links around as the book gets published. I have been looking forward to this set of material for a long time, and I am excited that it’s almost “ready for prime-time” because it’s something customers need. We all know that the modern IBM i platform has much more capability than the systems we sold in the ‘90s. We just need to use it that way!

IBM Systems Magazine is a trademark of International Business Machines Corporation. The editorial content of IBM Systems Magazine is placed on this website by MSP TechMedia under license from International Business Machines Corporation.