Enterprises require high availability for their
business-critical applications. Even the smallest unplanned
outage, or even a planned maintenance operation, can cause lost
sales, reduced productivity, and eroded customer confidence.
Additionally, updating and retrieving data must remain robust
enough to keep up with user demand.

Let’s take a look at how Tungsten Clustering helps enterprises
keep their data available and globally scalable, and compare it
to Amazon’s RDS running MySQL (RDS/MySQL).

Replicas and Failover

What does RDS do?

Having multiple copies of a database is ideal for high
availability. RDS/MySQL approaches this with “Multi-AZ”
deployments. The term “Multi-AZ” here is a bit confusing, as
enabling this simply means a single “failover
replica” will be created in a different availability zone from
the primary database instance. …
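For illustration only, a Multi-AZ RDS/MySQL instance might be provisioned with Python and boto3 along the following lines; every identifier, size and credential below is a placeholder rather than anything taken from this post.

# Sketch: provisioning an RDS/MySQL instance with Multi-AZ enabled via boto3.
# All identifiers, credentials and sizes are illustrative placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="example-mysql",   # placeholder instance name
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,                   # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me",         # use a secrets store in practice
    MultiAZ=True,                           # creates the standby "failover replica"
)                                           # in a different availability zone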

Tungsten Clustering is an extraordinarily flexible tool, with
options at every layer of operation.

In this blog post, we will describe and discuss the two different
methods for installing, updating and upgrading Tungsten
Clustering software.

When first designing a deployment, the question of installation
methodology is answered by inspecting the environment and
reviewing the customer’s specific needs.

Staging Deployment Methodology

All for One and One for All

Staging deployments were the original method of installing
Tungsten Clustering, and relied upon command-line tools to
configure and install all cluster nodes at once from a central
location called the staging server.

This staging server (which could be one of the cluster nodes)
requires SSH access to all …
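As a quick illustration of that prerequisite, a small sketch like the one below (hypothetical hostnames) can confirm passwordless SSH from the staging server to each node before an installation is attempted.

# Sketch: confirm the staging server can reach every cluster node over
# passwordless SSH before running the installer. Hostnames are placeholders.
import subprocess

NODES = ["db1", "db2", "db3"]  # hypothetical cluster hostnames

for node in NODES:
    result = subprocess.run(
        ["ssh", "-o", "BatchMode=yes", "-o", "ConnectTimeout=5", node, "echo ok"],
        capture_output=True, text=True,
    )
    status = "reachable" if result.returncode == 0 else "NOT reachable"
    print(f"{node}: {status}")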

Your database cluster contains your most
business-critical data, so proper performance under
load is essential to business health. If response time is slow,
customers (and staff) get frustrated and the business suffers a
slow-down.

If the database layer is unable to keep up with demand, all
applications can and will suffer slow performance as a result.

To prevent this situation, use load tests to determine the
throughput as objectively as possible.

In the sample load.pl script below, increase load by
increasing the thread quantity.

You can also run this against a database that already contains
data without polluting that data, since a new test database named
after each node's hostname is created for uniqueness.
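As a rough Python analogue of the same idea (this is not the load.pl script itself; the hostname-derived database name, placeholder credentials and pymysql are all assumptions), each worker thread writes into its own per-host test database, and raising THREADS raises the load.

# Sketch of the load-generation idea: each thread inserts rows into a
# per-host test database so existing schemas are never touched.
import socket
import threading
import pymysql

THREADS = 4  # raise this number to increase the load
DB_NAME = "test_" + socket.gethostname().replace("-", "_").replace(".", "_")

def worker(worker_id):
    conn = pymysql.connect(host="127.0.0.1", user="app", password="secret",
                           autocommit=True)
    with conn.cursor() as cur:
        cur.execute(f"CREATE DATABASE IF NOT EXISTS {DB_NAME}")
        cur.execute(f"CREATE TABLE IF NOT EXISTS {DB_NAME}.load_test "
                    "(id INT AUTO_INCREMENT PRIMARY KEY, payload VARCHAR(64))")
        for i in range(10000):
            cur.execute(f"INSERT INTO {DB_NAME}.load_test (payload) VALUES (%s)",
                        (f"worker-{worker_id}-row-{i}",))
    conn.close()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()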

In a previous post we went into detail about how to
implement Tungsten-specific checks. In this post we will focus on
the other standard Nagios checks that would help keep your
cluster nodes healthy.

Your database cluster contains your most business-critical data.
The slave nodes must be online, healthy and in sync with the
master in order to be viable failover candidates.

This means keeping a close watch on the health of the database
nodes from many perspectives, from ensuring sufficient disk space
to testing that replication traffic is flowing.

A robust monitoring setup is essential for cluster health and
viability – if your replicator goes offline and you do not know
about it, then that slave becomes effectively useless because it
has stale data.
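As one concrete illustration (not one of the Tungsten-specific checks from the earlier post), a Nagios-style plugin is simply a script that prints a status line and exits 0, 1 or 2 for OK, WARNING or CRITICAL. The sketch below covers the disk-space angle mentioned above; the path and thresholds are placeholders.

# Sketch of a Nagios-style disk-space check: print one status line and
# exit 0/1/2 for OK/WARNING/CRITICAL. Path and thresholds are placeholders.
import shutil
import sys

DATA_DIR = "/var/lib/mysql"   # hypothetical datadir
WARN_PCT, CRIT_PCT = 80, 90   # used-space thresholds in percent

usage = shutil.disk_usage(DATA_DIR)
used_pct = usage.used / usage.total * 100

if used_pct >= CRIT_PCT:
    print(f"CRITICAL - {DATA_DIR} is {used_pct:.1f}% full")
    sys.exit(2)
elif used_pct >= WARN_PCT:
    print(f"WARNING - {DATA_DIR} is {used_pct:.1f}% full")
    sys.exit(1)
else:
    print(f"OK - {DATA_DIR} is {used_pct:.1f}% full")
    sys.exit(0)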

The Player Accounts team at Riot Games needed to consolidate the
player account infrastructure and provide a single, global
accounts system for the League of Legends player base. To do
this, they migrated hundreds of millions of player accounts into
a consolidated, globally replicated composite database cluster in
AWS. This provided higher fault tolerance and lower latency
access to account data. In this talk by Tyler Turk (Infrastructure
Engineer, Riot Games), we discuss this effort to migrate eight
disparate database clusters into AWS as a single composite
database cluster replicated in four different AWS regions,
provisioned with Terraform, and managed and operated with Ansible. …


Big Brother is Watching You! The Power of Nagios

Even while you sleep, your servers are busy, and you simply
cannot keep watch all the time. Now, more than ever, with global
deployments, it is literally impossible to watch everything all
the time.

Enter Nagios, your best big brother ever. As a long-time player in
the monitoring market, Nagios has both …

Continuent Clustering supports true distributed multimaster
clustering. In this topology, there are cross-site replicator
services for each remote site. In a 3-site configuration, there
are a total of 9 replication streams to manage.
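One way to arrive at that count, assuming each site runs one local replication service plus one cross-site service pulling from each remote site, is simply to enumerate them:

# Sketch of the stream count: one local service per site plus one cross-site
# service per remote site gives 3 + 6 = 9 services across three sites.
sites = ["east", "west", "north"]

streams = []
for site in sites:
    streams.append(f"{site} (local)")
    for remote in sites:
        if remote != site:
            streams.append(f"{site} <- {remote}")

print(len(streams))   # 9
for s in streams:
    print(s)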

Continuent Clustering also offers a graphical administration tool
called the Tungsten Dashboard to help ease your management
burden. The GUI makes the deployment much easier to visualize and
administer.

For our example, we will have a Composite Multimaster dataservice
called global with three active, writable member
clusters (one per site): east,
west and north.

Dashboard Summary View

In the summary (collapsed) view, the composite service and all
member clusters are listed with associated information and
controls. Note that the Type for the composite dataservice
global is CompMM …

Watch the replay of this webinar and learn how
Bluefin Payment Systems provides 24/7/365 operation and
application availability for their PayConex payment gateway and
Decryptx decryption-as-a-service, essential to point-of-sale
(POS) solutions in retail, mobile, call centers and kiosks.

We discuss why Bluefin uses Continuent Clustering, and how
Bluefin runs two co-located data centers with multimaster
replication between the clusters in each data center, with full
failover both within each cluster and between clusters, while
handling 350 million records each month.

Did you know that Continuent Clustering supports having clusters
at multiple sites world-wide with either active-active or
active-passive replication meshing them together?

Not only that, but we support a flexible hybrid model that allows
for a blended architecture using any combination of node types.
So you can mix and match your highly available database layer on bare
metal, Amazon Web Services (AWS), Azure, Google Cloud, VMware,
etc.

In this article we will discuss using the Active/Passive model to
scale reads worldwide.

The model is simple: select one site as the Primary where all
writes will happen. The rest of the sites will pull events as
quickly as possible over the WAN and make the data available to
all local clients. This means your application gets the best of
both worlds:
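As a concrete illustration of this write-to-the-primary, read-locally split, here is a minimal sketch; the connector hostnames, credentials and the orders table are placeholders, and pymysql is assumed.

# Sketch of the active/passive routing idea: writes go to the primary site,
# reads go to the nearest local site. Hostnames and schema are placeholders.
import pymysql

WRITE_HOST = "connector.primary.example.com"  # primary site (all writes)
READ_HOST  = "connector.local.example.com"    # nearest site (local reads)

def get_connection(for_write=False):
    host = WRITE_HOST if for_write else READ_HOST
    return pymysql.connect(host=host, user="app", password="secret",
                           database="app_db", autocommit=True)

# Writes travel over the WAN to the primary site...
conn = get_connection(for_write=True)
with conn.cursor() as cur:
    cur.execute("INSERT INTO orders (item) VALUES (%s)", ("widget",))
conn.close()

# ...while reads are served from the nearest local site.
conn = get_connection()
with conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM orders")
    print(cur.fetchone()[0])
conn.close()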
