Starting with PMM 1.13, PMM uses Prometheus 2 for
metrics storage, which tends to be the heaviest consumer of
CPU and RAM. With the Prometheus 2 performance improvements, PMM can
scale to more than 1,000 monitored nodes per instance in its default
configuration. In this blog post we will look into PMM scaling
and capacity planning: how to estimate the resources required, and
what drives resource consumption.

We have now tested PMM with up to 1000 nodes, using a virtualized
system with 128GB of memory, 24 virtual cores, and SSD storage.
We found PMM scales pretty linearly with the available memory and
CPU cores, and we believe that a higher number of nodes could be …

In this instance, I want to show the data in different
dimensions, primarily to answer questions around how throughput
scales with increasing IOPS.

A recap: for the test I use Amazon instances and Amazon gp2 and
io1 volumes. In addition to the original post, I also tested two
gp2 volumes combined in software RAID0. I did this for the
following reason: Amazon caps single gp2 volume throughput at
160 MB/sec, and as we will see from the charts, this limits InnoDB
performance.

Also, a reminder from the previous post: we can increase gp2 IOPS
by increasing the volume size (up to the limit of 10,000 IOPS), and for
io1 we can increase IOPS by paying for additional provisioned IOPS.
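The gp2 size-to-IOPS relationship above can be sketched as a small helper. The constants are AWS's published gp2 limits at the time of this post (3 IOPS per GB baseline, a 100 IOPS floor, and the 10,000 IOPS cap mentioned above); check the current EBS documentation before relying on them:

```python
def gp2_iops(volume_size_gb: int) -> int:
    """Baseline gp2 IOPS: 3 IOPS per GB, floor of 100, capped at 10,000.
    These are the limits AWS published at the time of this post."""
    return min(10_000, max(100, 3 * volume_size_gb))

print(gp2_iops(100))   # 300
print(gp2_iops(5000))  # capped at 10000
```

So a ~3,334 GB volume is the point past which growing a gp2 volume buys no further IOPS, which is where io1 (pay-per-IOPS) or RAID0 over multiple volumes becomes interesting.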

Congratulations to Duo Security, which announced that it is to be
acquired by Cisco Systems for $2.35 billion. This is
a great outcome for all involved, and I'm very proud of what the
team has accomplished.

I worked with Duo for about three years, initially as an advisor
and ultimately as Chief Operating Officer running Sales,
Marketing, Products, Engineering and Services. I helped grow the
company from around $7m in Annual Recurring Revenue (ARR) to
about $100m. The company has continued to grow to 700 employees,
12,000 customers and revenues that I estimate could exceed
$200m ARR by year end, based on prior published numbers.

Percona is known for its MySQL performance expertise. With over 4,000
customers, we've studied, mastered, and executed many different
ways of scaling applications. Percona can help ensure your
application is highly available. Come learn from our playbook,
and leave this …

On Thursday (19th June), Mats Kindahl and I will be presenting a
free webinar on why and how you should be using MySQL Fabric to
add Sharding (scaling out reads & writes) and High
Availability to MySQL. This product has only recently gone GA,
so this is a good chance to discover whether it's for you and to get your
questions answered by the people who wrote the software! All you
need to do is register for the MySQL Fabric webinar here.

Abstract

MySQL Fabric is built around an extensible and open source
framework for managing farms of MySQL Servers. Currently two
features have been implemented – High Availability (built on top
of MySQL Replication) and scaling out using …

MySQL Fabric is a new framework that adds High Availability (HA)
and/or scaling-out for MySQL. This is the second in a series of
posts on the new MySQL Fabric framework; the first article
(MySQL Fabric – adding High Availability to
MySQL) explained how MySQL Fabric can deliver HA and then
stepped through all of the steps to configure and use it.

This post focuses on using MySQL Fabric to scale out both reads
and writes across multiple MySQL Servers. It starts with an
introduction to scaling out (by partitioning/sharding data) and
how MySQL Fabric achieves it before going on to work through a
full example of configuring sharding across a farm of MySQL
Servers together with the code that the application developer
needs to …
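To make the partitioning idea concrete, here is a minimal, self-contained sketch of hash-based shard routing. This illustrates only the concept, not MySQL Fabric's actual API: Fabric performs the equivalent routing inside the connector based on the sharding key, and the group names below are hypothetical:

```python
import hashlib

# Hypothetical shard map: each entry stands in for one HA group of
# MySQL Servers (names are illustrative, not Fabric identifiers).
SHARDS = ["group-0", "group-1", "group-2"]

def shard_for(key: str) -> str:
    """Route a sharding key to a shard using a stable hash, so the
    same key always maps to the same group of servers."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Reads and writes for one key must always land on the same shard.
print(shard_for("customer-42"))
print(shard_for("customer-42") == shard_for("customer-42"))  # True
```

The key property is stability: as long as the shard map is unchanged, every lookup for a given key returns the same shard, which is what lets both reads and writes scale out across servers.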

Following on from my post about MySQL Cluster sessions at the forthcoming
Connect conference, it's now the turn of MySQL
Replication: another technology at the heart of scaling and high
availability for MySQL.
