Wednesday, August 30, 2017

The Czech Republic and Slovakia DB2 Users Group (csDUG) is a Regional User Group (RUG) that was created 5 years ago.

On 23 November 2017, we will be hosting the 5th csDUG event, with 2 tracks!
The event is sponsored by IBM and CA Technologies.

This is a full-day FREE conference with top industry speakers - a great opportunity for DB2 users to meet industry experts!

Agenda of the conference

Location

IBM lab, building 4
V Parku 4,
148 00 Prague 4, Chodov
Czech Republic

Are you a qualified DB2 specialist? Do you want to discover what DB2 is or what it can do for you? Are you an experienced DBA working with other databases who needs support with DB2 for a new project? Are you interested in what role DB2 can play in the architecture of your business?
--> Join the November 23 csDUG conference to listen to industry experts and network with other people interested in DB2!

If you register before September 15th ... and if you have never attended an IDUG conference and are interested in receiving a FREE pass to IDUG (Lisbon, October 1-5), please mention it in the email.

Become a member of the virtual csDUG group at http://www.worldofdb2.com/group/csdug (this helps us to calculate the budget for the complimentary lunch). If you are not already a member of the WorldOfDB2 website, you will need to register first.

RC/Query now applies the “space L” functionality by default, with BMC users in mind.

From now on, users no longer need to type “space L” to display another attribute.

For example: from the Table list screen, users previously had to type “DB L” to display the database name; now, typing just DB returns the database name.

The same applies to the other “space L” commands in the other reports.

The existing functionality is NOT affected, however: users can still get the database name with “DB L” as well as with DB.

Shortening the START, STOP and DISPLAY commands

RC/Query now offers shortened forms of the START, STOP and DISPLAY commands.

From now on, users can use the STA, STO and DIS commands, alongside START, STOP and DISPLAY, to start, stop and display the status of objects.

Enabling case sensitivity in the RC/Query header

For example: a table creator can exist in both upper and lower case. What happens when the user wants to fetch only the records whose creator field is in lower case?

RC/Query has been enhanced to honor case-sensitive values in all of its reports’ header fields (previously this was limited to the Item name field; it now covers all the other header fields of RC/Query reports).

With this, users can fetch the data matching the exact case, without compromise.

Reset Header command to clean up the header fields in an RC/Query report

Current challenge:

o What if I want to remove the value of the “Object name” field?

o What if I want to remove the values of all the header fields of an RC/Query report?

Solution:

o RC/Query introduces a new feature to reset the header fields of all of its reports.

o The reset can be limited to the “Object name” field only, or extended to all the other fields of the RC/Query header.

o Use RESHDR to reset the value of the “Object name” field.

o Use RESHDR ALL to reset the values of all the fields of the current RC/Query report header.

DB2 Analytics Accelerator (IDAA) line commands

The following line commands have been added to support the DB2 Analytics Accelerator:

o PING

Verifies whether the IP connection between DB2 for z/OS and the IBM DB2 Analytics Accelerator is available.

o ACCALT

Alters the distribution keys and organization keys of one or more DB2 tables residing on the Accelerator, according to your specifications.

o ALOAD

Loads data from one or more source tables in DB2 to the Accelerator.

o ENARPL

Enables replication updates for one or more tables on the Accelerator.

o DISRPL

Disables replication updates for one or more tables on the Accelerator.

o DACCELF

Forcefully removes a table from the DB2 Analytics Accelerator.

o RESARCH

Restores archived table partition data from the DB2 Accelerator to its original location, according to your specifications.

Intelligent use of the Command utility

Intelligently identifies the object name against which the utility needs to be executed.

Thursday, August 10, 2017

Enterprise data are subject to various regulations depending
on their geographical location and type of business. An increased effort is
expected and mandated to respect those rules, typically meant to better secure
and protect the accuracy and privacy of enterprise data. In various
regulations, it is also expected to actually demonstrate Compliance, which is
not a piece of cake.

In addition, most people think
that external threats (such as an external hacker trying to access corporate
data) are the most common data security issues. In reality, various studies
have shown that internal threats comprise 80% of all security threats. In other
words, companies should make sure to protect their corporate data against their
own employees.

Examples of regulations

Sarbanes-Oxley Act (SOX): The goal of SOX is to regulate corporations in order to reduce fraud and conflicts of interest, to improve disclosure and financial reporting, and to strengthen confidence in public accounting. Specifically, section 404 of this act (the one giving IT shops fits) specifies that the CFO must do more than simply vow that the company’s finances are accurate; he or she must guarantee the processes used to add up the numbers. Those processes are typically computer programs that access data in a database, and DBAs create and manage that data as well as many of those processes.

Health Insurance Portability and Accountability Act (HIPAA): This legislation contains language specifying that health care providers must protect individuals’ health care information, even going so far as to state that the provider must be able to document everyone who so much as looked at that information. In other words: can a DBA produce a list of everyone who looked at a specific row or set of rows in any database?

Payment Card Industry Data Security Standard (PCI DSS): This well-known standard was developed by the major credit card companies to help prevent credit card fraud, hacking and other security issues. A company processing, storing, or transmitting credit card numbers must be PCI DSS compliant, or it risks losing the ability to process credit card payments. Given the availability and volume requirements of payment card transactions, this information is typically stored in an enterprise database.

General Data Protection
Regulation (GDPR) : This new regulation applies to organizations that do
business in the European Union, and will be effective in May 2018. It is meant
to strengthen and unify data protection for individuals within the European
Union, but it also focuses on the export of data (or even accessing the data)
outside the EU. The stated objective of GDPR is to return control of personal
data back to the individual. This includes data retention requirements, data
privacy rules and huge penalties for being out of compliance.

Personal Information Protection and Electronic Documents Act (PIPEDA): This Canadian regulation governs the collection, use, and disclosure of personal information, recognizing individuals’ right to privacy with respect to their personal information, and specifies the rules organizations must follow when they collect, use, and disclose it.

Demonstrate Compliance!

It’s (almost) as simple as a 1-2-3 process!

Step 1 to Data Compliance: Define Data Compliance for your business

Depending on the type of corporate data you own, the type of
business you are in, and the geography you do business with, the regulations
you want to comply with will be different. And the definition of Personal
Information to protect will be different!

As a typical example, the format of social security numbers differs from one country to another. If you do business in the Czech Republic (for example), social security numbers (Rodné číslo) have a specific format:

[0-9]{2}[015][0-9][0-9]{2}/?[0-9]{4}
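A quick way to sanity-check a pattern like this is a short Python sketch (the function name and test values are illustrative, not from any product):

```python
import re

# Pattern for the Czech birth number (Rodné číslo), per the format above.
# The character class is written [015] rather than [0,1,5]: inside a class,
# a comma would match a literal ',' character.
RODNE_CISLO = re.compile(r"[0-9]{2}[015][0-9][0-9]{2}/?[0-9]{4}")

def looks_like_rodne_cislo(value: str) -> bool:
    """Return True when the whole string matches the Rodné číslo pattern."""
    return RODNE_CISLO.fullmatch(value) is not None

print(looks_like_rodne_cislo("855230/1234"))  # True: YYMMDD/suffix shape
print(looks_like_rodne_cislo("855230-1234"))  # False: wrong separator
```

Note that `fullmatch` anchors the pattern to the whole string; a scanner looking for these numbers inside free text would use `search` or `finditer` instead.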

Step 2 to Data Compliance: Locate the sensitive personal data

While most companies understand the need to comply with regulation(s), a typical challenge is determining where all the sensitive personal data are actually located within the corporate data.

When you have defined what kind of data you are going after (Step 1), the challenge is to make sure you know where those data are stored: where are those “Rodné číslo” values in the corporate data?

You may think you know where all these are stored, but … are
you sure? Remember: the goal is to demonstrate compliance, so you better be
sure you know exactly where all those “Rodné číslo” are stored.

When you know what personal data you are going after, and
you know where they are located, the game is to make sure the authorizations
and security settings are defined properly, so that only the individuals that
must have access to it… have access to it.

In other words, you need to produce a report that clearly
states what personal data are where, and who has access to it.

Find and control regulated mainframe data, and classify it for compliance, with CA Data Content Discovery (DCD)

Compliance and adherence to regulations is critical to help
prevent data breaches.

By discovering where the data is located, classifying the
data to determine sensitivity level and providing comprehensive reporting on
the scan results, mission essential data can be protected and exposure risks
can be mitigated.

CA Data Content Discovery (DCD) comes with a number of pre-defined
classifiers out-of-the-box, to comply with various well-known regulations.

In addition, CA Data Content Discovery (DCD) can be configured to look for sensitive industry-specific or country-specific data in your corporate data; that is, you can create custom classifiers, such as one for the “Rodné číslo” discussed above.

SQL Adria is a DB2 Regional User Group for Croatia and Slovenia, founded 20+ years ago. This non-profit organization organizes conferences and seminars as a means to continuously provide technical education, share knowledge, and exchange ideas and experience among users and vendors.

Those events are regularly attended by dozens of DB2 Users, both DB2 Administrators and DB2 Application Developers.

The SQL Adria 2017 summer event took place in Šibenik, Croatia, from 11th June to 15th June 2017.

Sessions during the SQL Adria Seminar

Friday, June 9, 2017

One year has passed and here we are again, planning the next edition of your favorite conference, csDUG!

But what would this event be without presenters and without all the discussions on a given topic? Let’s again create a unique atmosphere, the same as in previous years, and let’s get carried away by the interesting topics that DB2 offers. This year, the conference will take place on November 23rd, 2017 (Thursday) in The Park, Prague 4 - Chodov.

We are opening a call for presentations for this event. Contact me to sign up; it is enough to state the title of your presentation and a short description of its main objectives. The presentation should not be longer than 45 minutes and is usually followed by a short discussion. The call for presentations will be open until 15th August 2017.

Thursday, June 1, 2017

Reducing Collection Overhead

When it comes to DB2 performance products, customers often demand a reduction in the associated overhead. Available in CA DB2 Tools 19, CA Subsystem Analyzer now allows you to activate the Getpage Sampling feature.

Sampling getpage requests reduces collection overhead
significantly. Instead of capturing all getpages for databases, tablespaces,
tables, indexes, datasets, and dataset extents, the percentage that you select
is sampled. With sampling enabled, getpage count values are approximated. The
sampling process is based on proven sample size and correction for finite
population calculations using a confidence level of 95 percent and a confidence
interval of 1 percent. Therefore, the approximated values are within 1 percent
of the actual values 95 percent of the time when sufficient getpage activity
occurs during an interval.

Watch the video

This feature is explained in a video, available on YouTube:

Recommended Sampling Rate

A 3 percent sampling rate results in the highest reduction of collection overhead. However, accuracy must be considered when choosing a sampling rate. The table below shows the recommended sampling rate for the number of actual getpage requests per object that you expect to occur per interval. Using the recommended sampling rate ensures accurate getpage counts. For example, if you expect 30,000 or more getpages for each object, specify 25: one out of every four getpages is sampled.
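The description above (95 percent confidence level, 1 percent confidence interval, correction for a finite population) matches the standard Cochran sample-size formula. Assuming that formula (the product documentation describes the calculation only qualitatively, so this is an approximation, not the vendor's exact code), a short sketch reproduces the roughly 25 percent recommendation for 30,000 getpages per interval:

```python
import math

def required_sample_size(population: int, z: float = 1.96,
                         interval: float = 0.01, p: float = 0.5) -> int:
    """Sample size for a 95% confidence level (z = 1.96) and a 1% confidence
    interval, corrected for a finite population (Cochran's formula).
    p = 0.5 is the conservative worst-case proportion."""
    n0 = (z ** 2) * p * (1 - p) / interval ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)          # finite-population correction
    return math.ceil(n)

# For ~30,000 getpages per interval, roughly a quarter of them must be
# sampled, which lines up with the recommended 25 percent rate above.
print(required_sample_size(30_000))
```

For very large populations the correction barely matters and the required sample stays near the infinite-population value of about 9,600, which is why the recommended percentage drops as getpage volume grows.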

Friday, May 26, 2017

If you just finished university with a degree in Computer Science, you are probably looking for a job… and if you are reading this article, you are probably still searching. Although you might not even know what a Mainframe is, you might want to consider a career on the “big iron”; and here is why.

The mainframe is a 40+ year old platform, and most of its software is written in low-level languages such as Assembler or COBOL. Granted, as much as that appears unattractive, it is a real opportunity: while most of the world’s data and processing resides on Mainframes, Mainframe professionals (so-called “Mainframers”) are close to retirement. The equation is simple: IT talent with such knowledge will be a rarity in the very near future, and the biggest Fortune companies will crave it.

But … do not think the Mainframe is solely legacy. In fact, lots of new projects exist on the Mainframe, most of which use Java, C, or C++. The new trend of “virtualization” is a notion that has existed for decades in the Mainframe world. If you think about it, Mainframe systems are nothing other than a private cloud: Mainframe means an enormous amount of data, incredible processing capabilities, and very high security (whoever heard of a virus on a Mainframe?). Mainframe also rhymes with green computing, since it uses much less energy than other platforms: one Mainframe can support a workload equivalent to thousands of distributed servers.