In this blog post, I look at the performance of Percona XtraDB Cluster on AWS using different
instance types, and recommend some best practices for
maximizing performance.

You can use Percona XtraDB Cluster in AWS environments. We often
get questions about how best to deploy it, and how to optimize
both performance and spend when doing so. I decided to look into
it with some benchmark testing.

In this blog post, I will benchmark gh-ost
against pt-online-schema-change.

When gh-ost came out, I was very excited. As MySQL ROW
replication became commonplace, you could use it to track changes
instead of triggers. This approach is cleaner and safer than Percona
Toolkit’s pt-online-schema-change. Since gh-ost doesn’t
need triggers, I assumed it would generate lower overhead and
work faster. I frequently called it “pt-online-schema-change on
steroids” in my talks. …

The purpose of the benchmark is to see how these
two solutions work on a single big server, with many CPU
cores and large amounts of RAM. Both systems can run many queries in parallel
across connections, so they should use many cores for SELECT workloads.
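As a rough illustration of that claim, here is a minimal Python sketch of the dispatch pattern a read-only benchmark uses: many independent clients issuing queries concurrently, so total throughput can grow with the number of workers until the cores are saturated. All names here (`run_query`, `run_benchmark`) are hypothetical stand-ins, not part of any benchmark tool; a real test would issue SELECTs through a MySQL or PostgreSQL driver instead of the CPU stub.

```python
# Hypothetical sketch: concurrent read-only clients, as in a SELECT benchmark.
from concurrent.futures import ThreadPoolExecutor

def run_query(_):
    # Stand-in for one read-only SELECT: independent work, no shared state.
    return sum(i * i for i in range(1000))

def run_benchmark(n_clients, queries_per_client):
    """Dispatch queries across n_clients concurrent workers; return the
    total number of queries completed."""
    with ThreadPoolExecutor(max_workers=n_clients) as pool:
        results = list(pool.map(run_query, range(n_clients * queries_per_client)))
    return len(results)

print(run_benchmark(8, 100))  # 800 queries spread over 8 workers
```

The point of the pattern is only that each client is independent, which is what lets the database spread the work over many cores.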

This blog compares how PostgreSQL and MySQL handle millions of
queries per second.

Anastasia: Can open source
databases cope with millions of queries per second? Many open
source advocates would answer “yes.” However, assertions aren’t
enough for well-grounded proof. That’s why in this blog post, we
share the benchmark testing results from Alexander Korotkov (CEO
of Development, Postgres Professional) and Sveta Smirnova
(Principal Technical Services Engineer, Percona). The comparative
research of PostgreSQL 9.6 and MySQL 5.7 performance will be
especially valuable for environments with multiple databases.

The idea behind this research is to provide an honest comparison
of the two popular RDBMSs. Sveta and Alexander wanted to test
the most recent versions of both MySQL and PostgreSQL with the
same tool, under the same challenging …

MySQL @ Facebook RocksDB appears to store at least 2x the
volume of the changes in a transaction. I don't know how much
space the row plus overhead takes in each transaction, so I'm
just going to say 2x the raw size of the data changed in the
transaction, as an approximation. I am not sure how this works for
updates either, that is, whether both old and new row information is
maintained. If old/new row data is maintained, then for a pure update
workload you would need 4x the RAM for the given transactional
changes. My bulk load was 12GB of raw data, so it failed, as I
have only 12GB of RAM in my test system.
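The back-of-envelope estimate above can be written down as a tiny helper. This is only a sketch under the 2x/4x assumptions stated in the text; `required_ram_gb` is my own name for illustration, not a RocksDB API.

```python
# Hypothetical helper for the memory estimate described above:
# RocksDB is assumed to need roughly 2x the raw changed bytes per
# transaction, and 4x when both old and new row images are kept
# for updates.
def required_ram_gb(raw_change_gb, is_update=False):
    """Rough RAM needed to hold one transaction's changes in memory."""
    factor = 4 if is_update else 2
    return raw_change_gb * factor

# The bulk load from the text: 12GB of raw data needs ~24GB,
# which is more than a 12GB test box has.
need = required_ram_gb(12)
print(need, need > 12)  # 24 True
```

Under these assumptions, the same 12GB as a pure-update workload would need roughly 48GB.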

The workaround (as suggested in the bug) is to set two
configuration …

The Netflix member experience is offered to 83+ million global
members, and delivered using thousands of microservices. These
services are owned by multiple teams, each having their own build
and release lifecycles, generating a variety of data that is
stored in different types of data store systems. The Cloud
Database Engineering (CDE) team manages those data store systems,
so we run benchmarks to validate updates to these systems,
perform capacity planning, and test our cloud instances with
multiple workloads and under different failure scenarios. We were
also interested in a tool that could evaluate and compare new
data store systems as they appear in the market or in the open
source domain, determine their performance characteristics and
limitations, and gauge whether they could be used in production
for relevant use cases. For these purposes, we wrote Netflix Data
Benchmark …

Saturday I was in my favorite grocery store, standing in
line, browsing the net on my phone. I read Vadim Tkachenko‘s blog
post about Measuring Percona Server Docker CPU/network overhead,
and his findings were the opposite of mine – he didn’t find any
measurable difference. Reading his post, I saw that he did find a huge
impact on networking […]
