With the introduction of multi-source replication in MySQL 5.7,
operations on a slave are slightly different from those in
single-source replication. Here is a list of operational tips for
convenience:

1. Skip a statement for a specific channel.

Sometimes we might find that one of the channels has stopped
replicating due to an error, and we may want to skip the offending
statement for that channel so that we can restart its slave thread.
We need to be very careful not to skip a statement from the
other channels, since SET GLOBAL sql_slave_skip_counter = N is a
global setting. How can we make sure the global
sql_slave_skip_counter is applied to a specific channel and not to
the others? Here are the steps:

1.1: Stop all replication channels:

stop slave;

1.2: Set the number of statements to skip:

SET GLOBAL sql_slave_skip_counter = N;
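As a sketch, the whole sequence might look like the following, assuming the skip counter is consumed by the first channel whose SQL thread is started (the channel name 'channel_2' is hypothetical):

```sql
-- 1.1: stop all replication channels
STOP SLAVE;

-- 1.2: set the number of statements to skip (here N = 1)
SET GLOBAL sql_slave_skip_counter = 1;

-- Start only the failing channel, so the skip applies to it alone
-- ('channel_2' is a placeholder; use the name of the broken channel)
START SLAVE FOR CHANNEL 'channel_2';

-- Then start the remaining channels normally
START SLAVE;
```

Starting the failing channel on its own before the others is what keeps the global counter from being consumed by a healthy channel.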

Whenever we upgrade our clients from one major version of MySQL to
another, we strongly recommend testing in two forms.

First, run a performance test comparing the old version and the new
version to make sure there aren't going to be any unexpected issues
with query processing rates. Second, run a functional test to ensure
that queries running on the old version won't hit syntax errors or
problems with reserved words in the new version we're upgrading to.

If a client doesn’t have an appropriate testing platform to
perform these types of tests, we will leverage available tools to
test to the best of our ability. More often than not this
includes using pt-upgrade after capturing slow logs with
…
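A common way to capture a representative workload for pt-upgrade is to temporarily log every query to the slow log on the source server; a minimal sketch (this assumes the usual approach of setting long_query_time to 0, which the elided text above may or may not match):

```sql
-- Send every statement to the slow query log for the capture window
SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 0;

-- ... let the representative workload run for a while, then stop logging
SET GLOBAL slow_query_log = OFF;
```

The captured slow log can then be replayed with pt-upgrade against instances running the old and new versions to compare result sets, warnings, and errors.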

One of the more common struggles I've had to assist with in regard
to Amazon RDS is enabling binary logging on read replicas, or
forming multi-tier replication on instances running version 5.6 or
later, after finding that multi-tier replication is not supported in
version 5.5 (for a reason that will become clear by the end of this
post).

First off, let’s have a look at the topology that I have in place
in my AWS account. As you’ll see below I have a master, blog1,
and a read replica that I created via the AWS console called
blog2. You’ll also notice that, despite being supported, if I
select instance actions while having blog2 highlighted the option
to create a read replica is grayed out.

Further, if we use the MySQL CLI to connect to blog2 and check
the global variables for log_bin and binlog_format, you’ll see
that binary logging is off and binlog_format is set to statement.
This is strange considering that the parameter …
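The check described above can be reproduced from the MySQL CLI like this:

```sql
-- Run against the read replica (blog2 in this topology)
SHOW GLOBAL VARIABLES WHERE Variable_name IN ('log_bin', 'binlog_format');
-- As described above, the replica reports log_bin = OFF
-- and binlog_format = STATEMENT
```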

On April 4th, 2012, Percona released Xtrabackup 2.0 to GA, along
with a new streaming feature called xbstream. This new tool allowed
compressed, parallel streaming backups when running xtrabackup or
innobackupex, without having to stream with tar, pipe to gzip or
pigz, and then pipe to netcat or socat to send the backup to the
recipient server. This resulted in …

High availability for MySQL has become increasingly relevant given
its ever-increasing rate of adoption and implementation. It's no
secret to anyone in the community that the popularity of MySQL has
become noteworthy. I still remember my start with MySQL in the
early 5.0 days, when people told me I might not want to waste my
time training on a database that didn't have large industry
adoption, but look at where we are now! One of
my favorite pages to cite when trying to exhibit this fact is the
db-engines.com ranking trend page where we can see
that MySQL is right up there and contending with enterprise
products such as Microsoft SQL Server and Oracle.

MySQL has gone from being part of the ever famous LAMP stack for
users looking to set up their first website to seeing adoption
from major technical players such as …

Having good historical metrics monitoring in place is critical for
properly operating, maintaining and troubleshooting database
systems, and Percona Monitoring and Management is one of
the options we recommend to our clients for this.

One common concern among potential users is how using this may
impact their database’s performance. As I could not find any
conclusive information about this, I set out to do some basic
tests and this post shows my results.

To begin, let me describe my setup. I used the following Google
Cloud instances:

One 4 vCPU instance for the MySQL server

One 2 vCPU instance for the sysbench client

One 1 vCPU instance for the PMM server

I used Percona Server 5.7 and PMM 1.5.3 installed via Docker.
Slow query log was enabled with long_query_time set to 0 …

While investigating alternatives to migrate to Google Cloud SQL,
I encountered a lack of support for external masters. However,
it’s possible to overcome this limitation by replicating into
Google Cloud SQL using Tungsten replicator.

Cloud SQL is Google’s database-as-a-service solution, similar to
RDS for Amazon Web Services. You can get a fully managed database
in only a few clicks (or API calls). At the time of writing this,
the only supported databases are MySQL and Postgres.

Cloud SQL alternatives

Google offers two different options for MySQL deployments.

1st generation instances:

Only MySQL versions 5.5 and 5.6 can be provisioned

Max memory is limited to 16 GB

Max of 250 GB storage (up to 500 GB with a Silver or higher
support package)

Recently, several issues have been reported to me about DDL
activity causing MySQL crash scenarios. In one case it stemmed
from dropping multiple databases, one after the other in rapid
succession. But in the case that I was recently dealing with
directly, where we were upgrading to MySQL 5.7, it was the result
of mysql_upgrade running an ALTER TABLE FORCE on a 2.2 TB table in
order to convert it to the new data format that supports
microsecond precision.

The issue occurred after the intermediate table had been
completely filled with all the necessary data and right when
MySQL would swap out the existing table for the intermediate.
After a period of time MySQL crashed and the following InnoDB
monitor output was found in the error log.

2017-11-19T00:22:44.070363Z 7 [ERROR] InnoDB: The
age of the last checkpoint is 379157140, which exceeds the log
group capacity 377483674.
InnoDB: …
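The "log group capacity" in this error is derived from the redo log configuration; a rough way to sanity-check it (note that InnoDB reserves some headroom, so the usable checkpoint-age capacity is somewhat less than the raw total):

```sql
-- Raw redo log size: file size times the number of files in the group.
-- The capacity InnoDB reports (377483674 above) is slightly below this
-- raw total because of the reserved headroom.
SELECT @@innodb_log_file_size * @@innodb_log_files_in_group
       AS redo_log_bytes;
```

If the checkpoint age can exceed this capacity during a large table rebuild, increasing innodb_log_file_size before the operation is one way to avoid the crash scenario.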
