Welcome to another post in the series of Percona Live featured
session blogs! In these blogs, we’ll highlight some of the
session speakers who will be at this year’s Percona Live
conference. We’ll also discuss how these sessions can help you
improve your database environment. Make sure to read to the end
to get a special Percona Live 2017 registration bonus!

MySQL’s Performance Schema is a relatively new tool for
measuring performance, and MySQL Workbench 6.1.2 is the latest
beta of that software. I have not had much time to play with
Performance Schema yet, but now I am taking my first steps with
the help of Workbench. Start Workbench and, under the Navigator,
you will find an item labeled Performance Schema Setup. Flip the
toggle from OFF to ON and start exploring.
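For reference, flipping that toggle does roughly what you could
also do by hand against the performance_schema tables. A minimal
sketch, assuming the performance_schema feature itself is
enabled at server startup:

```sql
-- Enable the statement instruments and their consumers at runtime.
-- (Requires performance_schema=ON in the server's option file.)
UPDATE performance_schema.setup_instruments
SET ENABLED = 'YES', TIMED = 'YES'
WHERE NAME LIKE 'statement/%';

UPDATE performance_schema.setup_consumers
SET ENABLED = 'YES'
WHERE NAME LIKE 'events_statements%';
```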

InnoDB Buffer stats by Schema is shown here, one of more than
twenty pre-established metrics available.

Now you can run queries and see what they cost, where the
server is waiting, or which indexes remain unused. You can even
use Workbench to alter the options file to set up other
Performance Schema instruments. Trying various settings for
optimizer_search_depth is simple with the …
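One of those checks, finding unused indexes, can also be run
directly as a query against Performance Schema, assuming table
I/O instrumentation is enabled (MySQL 5.6 and later):

```sql
-- Secondary indexes that have never been read since server start.
SELECT OBJECT_SCHEMA, OBJECT_NAME, INDEX_NAME
FROM performance_schema.table_io_waits_summary_by_index_usage
WHERE INDEX_NAME IS NOT NULL
  AND INDEX_NAME <> 'PRIMARY'
  AND COUNT_STAR = 0
  AND OBJECT_SCHEMA NOT IN ('mysql', 'performance_schema')
ORDER BY OBJECT_SCHEMA, OBJECT_NAME;
```

Counters reset at server restart, so judge "unused" only after a
representative workload has run.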

A few weeks ago I asked friends of mine who speak both English
and Russian whether it was worth translating the slides about
Performance Schema that I prepared for a seminar at Devconf
2013. They said it was. Today I finished the translation and
uploaded the slides to SlideShare.

Strictly speaking, a simple translation of the slides is not
enough, because they were created for a seminar where I was
going to explain what they mean. I think I need to repeat the
same seminar, this time in English. But if you have a rough idea
of what Performance Schema is and need hints for practical use,
you will find such suggestions in the slides. You will also find
ready-to-use queries that you can use to troubleshoot the most
frequent performance issues.
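As a taste of that style of query (this one is a common starting
point, not necessarily taken from the slides), you can rank
statement digests by total latency using MySQL 5.6’s digest
summary table; timer values are in picoseconds:

```sql
-- Top 10 normalized statements by total time spent.
SELECT DIGEST_TEXT,
       COUNT_STAR,
       SUM_TIMER_WAIT / 1000000000000 AS total_latency_s,
       SUM_ROWS_EXAMINED,
       SUM_NO_INDEX_USED
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;
```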

In the article about the role of a primary key, I mentioned that
a secondary index in an InnoDB table consists not only of the
values of its member columns: the values of the table’s primary
key are also appended to each index entry. In other words, the
primary key’s contents are part of every secondary index.
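A small sketch to illustrate the point (table and column names
are hypothetical): a query that selects only the primary key
while filtering on a secondary index can be satisfied from that
index alone, precisely because the primary key is stored in it.

```sql
CREATE TABLE users (
  id    INT UNSIGNED NOT NULL AUTO_INCREMENT,
  email VARCHAR(255) NOT NULL,
  name  VARCHAR(64)  NOT NULL,
  PRIMARY KEY (id),
  KEY idx_email (email)     -- internally stores (email, id)
) ENGINE=InnoDB;

-- EXPLAIN reports "Using index": id comes from idx_email itself,
-- even though id is not an explicit member of that index.
EXPLAIN SELECT id FROM users WHERE email = 'a@example.com';
```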

MySQL performance is largely defined by keys and how efficiently
queries can use them. As you scale, at a certain point it is no
longer enough to just have some indexes and still get good
performance in return. You have to really figure them out and
allow your queries to do as little work as possible.

The approach presented in this article can sometimes help in
designing such good, efficient indexes. As a consultant, I have
to rely on it myself from time to time when optimizing a query
in a database I know nothing about.

Let’s assume there is an application that collects user activity
in various places. The application uses a poorly indexed
database, so there are plenty of examples to choose from. Our
example query performs a full table scan, which means it reads
all rows from the table it uses. It is also among the most
popular statements executed by the application.
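To make this concrete with a hypothetical schema (the table and
column names here are mine, not from the original application):

```sql
-- With no suitable index, EXPLAIN shows type=ALL: a full table scan.
EXPLAIN SELECT *
FROM activity
WHERE user_id = 123
  AND created_at > '2013-01-01'
ORDER BY created_at DESC;

-- A composite index lets the query touch only the matching rows,
-- and returns them already in the requested order:
ALTER TABLE activity ADD KEY idx_user_created (user_id, created_at);
```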

In my previous post, I discussed scaling web database
performance in MySQL Cluster using auto-sharding and
active/active geographic replication, enabling users to scale
both within and across data centers.

I also mentioned that while scaling the write performance of any
web service is critical, it is only one of several dimensions of
scalability, which include:

- The need to scale operational agility to keep pace with demand.
This means being able to add capacity and performance to the
database, and to evolve the schema – all without downtime;

- The need to scale queries by having flexibility in the APIs
used to access the database – including SQL and NoSQL interfaces;

- The need to scale the database while maintaining continuous
availability.
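On the first point, MySQL Cluster supports online schema
changes. A sketch against a hypothetical NDB table (the ONLINE
keyword is the MySQL Cluster 7.x syntax; the operation is
subject to NDB’s online-DDL restrictions):

```sql
-- Add a column to an NDB table without blocking reads or writes.
ALTER ONLINE TABLE user_profile
  ADD COLUMN last_login DATETIME;
```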

Today I have released common_schema, a utility schema for MySQL
that includes many views and functions, and is intended to be
installed on any MySQL server.

What does it do?

There are views providing all sorts of useful information:
stuff related to schema analysis, data dimensions, monitoring,
processes & transactions, security, internals... There are also
basic functions addressing common needs.

Some of the views/routines simply formalize those queries we tend
to write over and over again. Others take the place of external
tools, answering complex questions via SQL and metadata. Still
others help out with SQL generation.
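For instance (view and routine names as documented by the
common_schema project; availability depends on the installed
version):

```sql
-- List secondary keys made redundant by other keys:
SELECT * FROM common_schema.redundant_keys;

-- eval() executes the SQL statements returned by a query; here it
-- would drop the redundant keys found above (use with care):
CALL common_schema.eval(
  'SELECT sql_drop_index FROM common_schema.redundant_keys');
```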

An InnoDB table must have a primary key (one is created if you
don’t define it yourself). You may have a natural key at hand.
Stop! Allow me to suggest that an AUTO_INCREMENT may be better.

Why should one add an AUTO_INCREMENT PRIMARY KEY to a table that
already has a natural key? Isn’t an AUTO_INCREMENT a pseudo key,
meaning it has no explicit relation to the row data, other than
being a number and unique?

Yes, indeed so. Nevertheless, consider:

Natural keys are often multi-column.

Multi-column PRIMARY KEYs make for larger keys, and make for
bloated secondary keys as well. You may waste space storing the
additional AUTO_INCREMENT column, but you may gain that space
back on the secondary keys.
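A hypothetical comparison makes the trade-off visible: with a
three-column natural PRIMARY KEY, every secondary index entry
carries all three columns; with an AUTO_INCREMENT key it carries
only the 4-byte id.

```sql
CREATE TABLE registrations_natural (
  country    CHAR(2)      NOT NULL,
  email      VARCHAR(255) NOT NULL,
  registered DATETIME     NOT NULL,
  status     TINYINT      NOT NULL,
  PRIMARY KEY (country, email, registered),
  KEY idx_status (status)  -- internally: (status, country, email, registered)
) ENGINE=InnoDB;

CREATE TABLE registrations_auto (
  id         INT UNSIGNED NOT NULL AUTO_INCREMENT,
  country    CHAR(2)      NOT NULL,
  email      VARCHAR(255) NOT NULL,
  registered DATETIME     NOT NULL,
  status     TINYINT      NOT NULL,
  PRIMARY KEY (id),
  UNIQUE KEY uq_natural (country, email, registered),
  KEY idx_status (status)  -- internally: (status, id)
) ENGINE=InnoDB;
```

The natural key remains enforced via the UNIQUE constraint, while
every additional secondary index stays small.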

Content reproduced on this site is the property of the respective copyright holders.
It is not reviewed in advance by Oracle and does not necessarily represent the opinion
of Oracle or any other party.