Wednesday, April 24, 2019

Why LDAP?

LDAP stands for Lightweight Directory Access Protocol. It is based on a client-server model: a client queries the LDAP server, which responds with an answer or with a pointer to where the client can get more information. Most organizations have LDAP set up and configured for managing users and their credentials for internal applications. Using LDAP, users can have single sign-on for most applications.

When using LDAP with MySQL for authentication, password management is offloaded to the LDAP service. Users that don't exist in the directory cannot access the database, and password management functions such as enforcing strong passwords and password rotation can be off-loaded as well.

How to configure LDAP with MySQL?

If you are using Percona Server for MySQL, i.e. Percona's flavor of MySQL, this can be done easily using the Percona PAM authentication plugin.

Installing Percona PAM authentication plugin

Percona PAM Authentication Plugin acts as a mediator between the MySQL server, the MySQL client, and the PAM stack. The server plugin requests authentication from the PAM stack, forwards any requests and messages from the PAM stack over the wire to the client (in cleartext) and reads back any replies for the PAM stack.

To install the plugin, first verify that the plugin's shared library exists in the MySQL plugin directory, and then install it from the MySQL client, as sketched below.
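A minimal sketch, assuming the plugin shared library shipped with Percona Server is named auth_pam.so and already sits in the plugin directory:

mysql> SHOW VARIABLES LIKE 'plugin_dir';

mysql> INSTALL PLUGIN auth_pam SONAME 'auth_pam.so';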

LDAP authentication can return to MySQL a user name different from the operating system user, based on the LDAP group of the external user.

For example, an LDAP user named stacy can connect and have the privileges of the MySQL user named dba_users, if the LDAP group for stacy is dba_users.

Creating a user defined outside of MySQL tables

mysql> CREATE USER 'yashada'@'%' IDENTIFIED WITH auth_pam;

Query OK, 0 rows affected (0.04 sec)

mysql> GRANT SELECT ON ycsb.* TO 'yashada'@'%';

Query OK, 0 rows affected (0.02 sec)

mysql> FLUSH PRIVILEGES;

Query OK, 0 rows affected (0.01 sec)

mysql -u yashada -p

Enter password:

Welcome to the MySQL monitor. Commands end with ; or \g.

Creating proxy users

If membership in a group should grant certain MySQL privileges, set up proxy users to map a user's privileges to its defined group.

A good example of this is a DBA group, or a reporting_users group that needs read-only access. This offloads the addition and removal of users from the MySQL DBA and passes it on to LDAP.
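As a sketch of how such a mapping might look (the PAM service name mysqld, the LDAP group name dba_users, and the placeholder password are assumptions for illustration, not values from an actual setup):

-- proxied account that actually holds the privileges
mysql> CREATE USER 'dba_users'@'%' IDENTIFIED BY 'long-random-password';
mysql> GRANT SELECT, INSERT, UPDATE, DELETE ON ycsb.* TO 'dba_users'@'%';

-- anonymous proxy account: members of the LDAP group dba_users get the privileges of the MySQL user dba_users
mysql> CREATE USER ''@'' IDENTIFIED WITH auth_pam AS 'mysqld, dba_users=dba_users';
mysql> GRANT PROXY ON 'dba_users'@'%' TO ''@'';

With this in place, adding or removing someone from the dba_users LDAP group changes their database access without any GRANT or REVOKE on the MySQL side.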

Tuesday, January 29, 2019

In the DBA operations world there are always challenges with passwords: how do you use a password for logins, automation, and operational scripts in a way that does not expose it?

One of the MySQL utilities that addresses some of these questions is mysql_config_editor.

mysql_config_editor enables you to store authentication credentials in a login path file named .mylogin.cnf. On Linux these credentials are stored in the current user's home directory.

Installing mysql_config_editor

To install mysql_config_editor, all you need is the MySQL client installed. The package can be particular to the MySQL flavor you use; I use Percona Server, so installing the Percona-Server-client rpm is enough to get mysql_config_editor.
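As a sketch, a login path named scripts (the name used in the example below; the user and host are placeholders) is created with the set command, which prompts for the password and stores it in .mylogin.cnf:

[root@ ~]# mysql_config_editor set --login-path=scripts --host=localhost --user=root --password

Enter password: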

Using the password in scripts

This means that we can run scripts without providing the password, for example a script that takes a backup or promotes a slave and runs on the database server. Here is an example -

[root@ ~]# mysql --login-path=scripts -e "show slave status \G"

Slave_IO_State:

Master_Host: XXXX

Master_User: XXXX

Master_Port: 3306

Connect_Retry: 60

Master_Log_File:

Read_Master_Log_Pos: 4

Relay_Log_File: mysql_relay_log.000001

Risks and Caveats

While this is a comparatively better way of handling logins for automation scripts, it is by no means completely secure, which is the reason for the "supposedly" in the title of this post.

[root@ ~]# mysql_config_editor print --all

[mysqlconn]

user = root

password = *****

host = localhost

[monitor]

user = monitor

password = *****

host = XXXX

port = 3306

We can read the contents of this encrypted file using the my_print_defaults utility.

[root@ ~]# my_print_defaults -s monitor

--user=monitor

--password=B*kA2aBntGYdvJaf

--host=XXXX

--port=3306

my_print_defaults is part of the standard MySQL install.

For this reason, even though mysql_config_editor saves the password in an encrypted file, it is recommended that it only be used by the root Linux user, and that the accounts saved in mysql_config_editor be restricted to 'user'@'localhost', as sketched below. Root access on these servers also needs to be tightly controlled and regulated.
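A minimal sketch of that restriction, using a hypothetical backup account (the account name, password, and privilege list are illustrative only):

mysql> CREATE USER 'backup'@'localhost' IDENTIFIED BY 'long-random-password';
mysql> GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'backup'@'localhost';

[root@ ~]# mysql_config_editor set --login-path=backup --host=localhost --user=backup --password

Even if the login path leaks, the account can only be used from the database server itself.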

Wednesday, December 19, 2018

As applications mature, tables tend to grow indefinitely, especially the time-series ones. They have tens of millions of rows and run up to a few TBs in size. Even though these tables hold so much data, applications only need to access data that was saved recently (say, within the last year or so). A trivial task such as adding a new index or changing the type of a column on such a humongous table becomes very painful. I was charged with such a task recently, and that got me thinking about archiving strategies.

Archiving is a good practice for keeping the data down to the required working set, while at the same time maintaining the data that might be needed for legal or compliance reasons, or for certain less frequent workflows in the application, like looking at the 10-year purchase history of a customer.

Though archiving seems like a database problem, it cannot be done in a vacuum. It needs buy-in from the application, legal, and compliance teams to establish what the archiving boundaries are, and what the access patterns and availability requirements for the archived data are.

But solving the database side of the problem is something right up my alley, so I thought of testing a few approaches. One of my favorites, for its ease of use, is the EXCHANGE PARTITION feature in MySQL 5.7.

In MySQL 5.7, it is possible to exchange a table partition or subpartition with a table using:

ALTER TABLE pt EXCHANGE PARTITION p WITH TABLE nt;

where pt is the partitioned table, p is the partition of pt to be exchanged, and nt is a non-partitioned table.

The privileges needed for the statement are the combination of ALTER TABLE and TRUNCATE TABLE on both tables.

This provides a great opportunity for archival of
partitioned tables.

Consider an online retail store that logs its invoices or orders in a table. Here is an oversimplified invoice table for the store.

CREATE TABLE `invoice` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `invoice_num` int(10) unsigned NOT NULL,
  `stockcode` int(10) unsigned NOT NULL,
  `invoice_date` datetime NOT NULL,
  `price` decimal(10,2) DEFAULT NULL,
  PRIMARY KEY (`id`,`invoice_date`)
) ENGINE=InnoDB
PARTITION BY RANGE (YEAR(invoice_date))
(PARTITION p2009 VALUES LESS THAN (2010) ENGINE = InnoDB,
 PARTITION p2010 VALUES LESS THAN (2011) ENGINE = InnoDB,
 PARTITION p2011 VALUES LESS THAN (2012) ENGINE = InnoDB,
 PARTITION p2012 VALUES LESS THAN (2013) ENGINE = InnoDB,
 PARTITION p2013 VALUES LESS THAN (2014) ENGINE = InnoDB,
 PARTITION p2014 VALUES LESS THAN (2015) ENGINE = InnoDB,
 PARTITION p2015 VALUES LESS THAN (2016) ENGINE = InnoDB,
 PARTITION p2016 VALUES LESS THAN (2017) ENGINE = InnoDB,
 PARTITION p2017 VALUES LESS THAN (2018) ENGINE = InnoDB,
 PARTITION p2018 VALUES LESS THAN (2019) ENGINE = InnoDB,
 PARTITION pMAX VALUES LESS THAN MAXVALUE ENGINE = InnoDB);

The queries on this table only need data from the last year to be available; however, for compliance and other infrequent workflows we need to retain the older data. EXCHANGE PARTITION gives us an opportunity to archive partitions with quick DDL operations.

Consider the need to archive 2010 data, which is in the p2010 partition.

mysql> select count(*) from invoice PARTITION (p2010);

+----------+
| count(*) |
+----------+
|  1111215 |
+----------+
1 row in set (0.38 sec)

To use EXCHANGE PARTITION, the partitioned table pt and the non-partitioned table nt that the data will be archived into need to meet a few requirements -

- The non-partitioned table needs to have the same structure as the partitioned table.

- It cannot be a temporary table.

- It cannot have foreign keys, and no foreign keys can refer to it.

- There must be no rows in nt that lie outside the boundaries of the partition p.

Let's create the table invoice_2010, which will archive all invoices from 2010, and exchange the p2010 partition into it, as sketched below.
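A sketch of the statements involved (invoice_2010 must be non-partitioned and have the same structure as invoice, so the partitioning copied by CREATE TABLE ... LIKE is removed first):

mysql> CREATE TABLE invoice_2010 LIKE invoice;
mysql> ALTER TABLE invoice_2010 REMOVE PARTITIONING;
mysql> ALTER TABLE invoice EXCHANGE PARTITION p2010 WITH TABLE invoice_2010;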

In its current implementation, a row-by-row validation does a full table scan on the non-partitioned table to check whether any row violates the partitioning rule. From the open worklog, it seems there are plans to have the command use an index instead of a full table scan, but that isn't implemented yet. A workaround is to use WITHOUT VALIDATION.

To avoid time-consuming validation when exchanging a partition with a table that has many rows, it is possible to skip the row-by-row validation step by appending WITHOUT VALIDATION to the ALTER TABLE ... EXCHANGE PARTITION statement. However, with this the onus lies on the engineer to verify that no partitioning rules are being violated.
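Continuing the earlier sketch, the exchange with validation skipped would look like:

mysql> ALTER TABLE invoice EXCHANGE PARTITION p2010 WITH TABLE invoice_2010 WITHOUT VALIDATION;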

Internally, the exchange operation consists of -

- Verifying that the metadata matches (i.e. both tables have the same structure)

- If WITHOUT VALIDATION is not used, verifying the data in the non-partitioned table

- Upgrading to an exclusive metadata lock on both tables

- Renaming the non-partitioned table to the partition and the partition to the non-partitioned table

- Releasing the metadata locks

It would have been nice if it were possible to append to the non-partitioned table rather than exchange, but my guess is that it then wouldn't be a metadata-only operation. The same can be achieved by an exchange into an intermediate table, followed by a copy into the archive table.

In the previous example, this would involve creating a table invoice_2010_intermediate, exchanging p2010 from the invoice table with invoice_2010_intermediate, and copying from invoice_2010_intermediate into invoice_2010, as sketched below.
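A sketch of that intermediate-table approach, reusing the names above (note that the INSERT ... SELECT is a regular row copy, not a metadata-only operation):

mysql> CREATE TABLE invoice_2010_intermediate LIKE invoice;
mysql> ALTER TABLE invoice_2010_intermediate REMOVE PARTITIONING;
mysql> ALTER TABLE invoice EXCHANGE PARTITION p2010 WITH TABLE invoice_2010_intermediate;
mysql> INSERT INTO invoice_2010 SELECT * FROM invoice_2010_intermediate;
mysql> DROP TABLE invoice_2010_intermediate;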

But for what it does, I think it is a delightful approach to archiving data that you still need on your database server for reads, but that no longer changes.

Friday, January 19, 2018

In the last post we saw how the log is written during the operation of a database system. In this post we will go over the actual recovery mechanism and how the log is used.

Why recovery?

As we discussed in our first post on this topic, a recovery algorithm ensures that all the changes that were part of a transaction that was not committed are rolled back and all the changes that are part of a transaction that was committed persist even after a crash, restart or error.

What exactly happens in recovery?

There are two processes essential for recovery that happen in the database system on an ongoing basis: checkpointing and write-ahead logging.

What is a checkpoint?

A checkpoint is a snapshot of the DBMS state. By taking checkpoints periodically, the work done during recovery can be reduced. A begin_checkpoint record is written to indicate the start of the checkpoint, and an end_checkpoint record containing the transaction table and the dirty page table is then appended to the log. After the checkpoint process is complete, a special master record containing the LSN of the begin_checkpoint log record is written. When the system comes back from a crash, the restart process begins by locating the most recent checkpoint.

What is write ahead logging?

We saw this already in the last post; what it essentially means is that any change to the database is first recorded in the log, and the record in the log must be written to stable storage (disk) before the change to the database itself is written to disk.

Recovery proceeds in three phases - Analysis, Redo and Undo.

During Analysis the system determines which transactions were committed and which were not, and essentially collects information about all transactions that were active (not committed or rolled back) at the time of the crash. Redo retraces or re-does all actions of the system and brings it back to the state it was in at the time of the crash. This is followed by the Undo phase, which rolls back the actions of the transactions that were active at the time of the crash.

We will take a look at what happens in each phase in detail next.

Let's continue from the example in our last post.

LSN | prevLSN | transID | type   | PageID | Length | Offset | Before | After
1   | -       | T1      | update | P500   | 3      | 21     | ABC    | DEF
2   | -       | T2      | update | P600   | 3      |        | HIJ    | KLM
3   | 2       | T2      | update | P500   | 3      | 20     | GDE    | QRS
4   | 1       | T1      | update | P505   | 3      |        | TUV    | WXY
5   | 3       | T2      | commit | -      | -      | -      | -      | -

Let us assume that T1 changes NOP to ABC on page P700 and writes the following log record.

LSN | prevLSN | transID | type   | PageID | Length | Offset | Before | After
1   | -       | T1      | update | P500   | 3      | 21     | ABC    | DEF
2   | -       | T2      | update | P600   | 3      |        | HIJ    | KLM
3   | 2       | T2      | update | P500   | 3      | 20     | GDE    | QRS
4   | 1       | T1      | update | P505   | 3      |        | TUV    | WXY
5   | 3       | T2      | commit | -      | -      | -      | -      | -
6   | 4       | T1      | update | P700   | 3      |        | NOP    | ABC

Let us look at a scenario when the system crashes before the last log record is written to stable storage.

What exactly happens in the Analysis phase?

The analysis phase begins by examining the most recent checkpoint and initializing the dirty page table and transaction table to copies of those structures in the end_checkpoint record. The analysis then scans the log forward till it reaches the end of the log.

If a log record other than an end record for a transaction T is encountered, an entry for T is added to the transaction table if it is not already there. The entry for T is modified so that its lastLSN field is set to the LSN of this record.

If an end log record for transaction T is encountered, T is removed from transaction table because it is no longer active.

If the log record is a commit record, the status is set to commit (C); otherwise it is set to U, indicating it needs to be undone.

If a redo log record affecting page P is encountered and P is not in the dirty page table, an entry for P is added to the dirty page table with recLSN set to the LSN of this record.

At the end of analysis, the transaction table contains all transactions that were active at the time of the crash. In our scenario, let us assume the checkpoint was taken at the very beginning, when both the transaction table and the dirty page table were empty, and analysis therefore starts from the first log record.

Looking at LSN 1

1   | -       | T1      | update | P500   | 3      | 21     | ABC    | DEF

Transaction Table

transID | lastLSN
1       | 1

lastLSN - LSN of the most recent log record belonging to the transaction.

Dirty Page Table

pageID | recLSN
P500   | 1

recLSN - LSN of the first log record that caused the page to become dirty

Looking at LSN 2

2   | -       | T2      | update | P600   | 3      |        | HIJ    | KLM

Transaction Table

transID | lastLSN
1       | 1
2       | 2

Dirty Page Table

pageID | recLSN
P500   | 1
P600   | 2

Looking at LSN 3

3   | 2       | T2      | update | P500   | 3      | 20     | GDE    | QRS

Transaction Table

transID | lastLSN
1       | 1
2       | 3

Dirty Page Table

pageID | recLSN
P500   | 1
P600   | 2

Looking at LSN 4

4   | 1       | T1      | update | P505   | 3      |        | TUV    | WXY

Transaction Table

transID | lastLSN
1       | 4
2       | 3

Dirty Page Table

pageID | recLSN
P500   | 1
P600   | 2
P505   | 4

Looking at LSN 5

5   | 3       | T2      | commit | -      | -      | -      | -      | -

Transaction Table

transID | lastLSN
1       | 4

Dirty Page Table

pageID | recLSN
P500   | 1
P600   | 2
P505   | 4

Since the system crashed before log record 6 could be written to stable storage, the log record is not read in the Analysis phase at all. This is the state of transaction table and dirty page table at the end of the analysis phase.

What exactly happens in the Redo phase?

During the redo phase the system applies updates of all transactions committed or otherwise. If a transaction was aborted before the crash and its updates were undone, as indicated by compensation log records (CLRs), the actions described in CLRs are also reapplied.

The Redo phase starts with the smallest LSN in the dirty page table that was constructed in the Analysis phase. This LSN refers to the oldest update that may not have been written to the disk prior to the crash. Starting from this LSN, redo scans forward till it reaches the end of the log.

For each log record (update or CLR), redo checks the dirty page table

If the page is not in dirty page table the log record is ignored as this means all the changes to the page have been already written to the disk.

If the page is in the dirty page table but the recLSN (the LSN that made the page dirty) is greater than the LSN of the record being checked, the record is ignored. This means the change was already written to disk and a later LSN made the page dirty again.

It then retrieves the page and checks the most recent LSN on the page (pageLSN); if this is greater than or equal to the LSN of the record being checked, the record is ignored, as the page already contains the change from that LSN.

In all other cases the logged action is redone, whether it is an update record or a CLR (a record written during rollback/abort). The logged action is reapplied and the pageLSN on the page is set to the LSN of the redone record. No additional log record is written.

Considering the transaction table and the dirty page table at the end of our analysis phase in our example

Transaction Table

transID | lastLSN
1       | 4

Dirty Page Table

pageID | recLSN
P500   | 1
P600   | 2
P505   | 4

The Redo phase starts with the smallest LSN in the dirty page table, which is 1, and scans forward through the log.

Looking at LSN 1

1   | -       | T1      | update | P500   | 3      | 21     | ABC    | DEF

P500 is in the dirty page table, and recLSN which is 1, is equal to the LSN that is being checked. Therefore the system retrieves the page. It checks the pageLSN, which is less than 1 and therefore decides the action must be redone. It changes ABC to DEF.

Looking at LSN 2

2   | -       | T2      | update | P600   | 3      |        | HIJ    | KLM

P600 is in the dirty page table and recLSN is 2, which is equal to the LSN being checked. Therefore the system retrieves the page. It checks the pageLSN, which is greater than or equal to the LSN being checked, and therefore it does not need to redo the update (remember, T2 was committed?)

Looking at LSN 3

3   | 2       | T2      | update | P500   | 3      | 20     | GDE    | QRS

This is the same case as the previous one and no redo is necessary.

Looking at LSN 4

4   | 1       | T1      | update | P505   | 3      |        | TUV    | WXY

P505 is in the dirty page table. The pageLSN is less than the LSN being checked and therefore the update needs to be redone. It changes TUV to WXY.

Looking at LSN 5

5   | 3       | T2      | commit | -      | -      | -      | -      | -

Since this is not an update or CLR record no action needs to be done. At the end of the redo phase, an end record is written for T2.

What exactly happens in the Undo phase?

The aim of the Undo phase is to undo the actions of all transactions that were active at the time of the crash, effectively aborting them. The Analysis phase identifies all transactions that were active at the time of the crash, along with their most recent LSN (lastLSN). All these transactions must be undone in the reverse order in which they appear in the log, so undo starts from the largest, i.e. most recent, LSN among the transactions to be undone. For each log record:

If the record is a CLR and its undoNextLSN value is not null, the undoNextLSN value is added to the set of log records to undo. If the undoNextLSN is null, an end record is written for the transaction because it is completely undone, and the CLR is discarded.

If the record is an update, a CLR is written and the corresponding action is undone, just as if the system were doing a rollback, and the prevLSN in the update record is added to the set of records to be undone.

When the set of records to be undone is completely empty the undo phase is complete.

Once the undo phase is complete, the system is said to be “recovered” and can proceed with normal operations.

In our example, the transaction table from Analysis phase is -

Transaction Table

transID | lastLSN
1       | 4

The undo phase starts from log record with LSN 4 and creates a set of actions to undo

Looking at LSN 4

4

1

T1

update

P505

3

TUV

WXY

WXY is changed to TUV

Set to Undo - {1}, which is the prevLSN. A CLR is added:

6   | 4       | T1      | CLR    | P505   | 3      |        | WXY    | TUV    | 1

where 1 is the undoNextLSN

Looking at LSN 1

1   | -       | T1      | update | P500   | 3      | 21     | ABC    | DEF

DEF is changed to ABC.

Set to Undo - {}, since prevLSN is null. A CLR is added:

7   | 6       | T1      | CLR    | P500   | 3      |        | DEF    | ABC    | -

Since the set of actions to be undone is now empty, the undo is complete. T1 is removed from the transaction table, a checkpoint is taken, and the system is in a recovered state and can proceed with normal operation.

What happens if a system crashes during crash recovery?

If there is a crash during crash recovery, the system can still do recovery as for every update that was undone, a CLR is written and the system needs to just redo the CLRs. This is why CLRs are an important part of recovery.

In our example if the system crashed after the change from WXY to TUV was done but before the change from DEF to ABC was done, when the system is in recovery state again, it would see the CLR for the WXY to TUV change in the redo phase and redo or repeat the change. The change from DEF to ABC would be done as a part of undo during the latter/second recovery.

In the next and hopefully last post on recovery, I’ll try to look at MySQL logs and source code related to the recovery component and see some of these things in action.