Historically I have not cared much about MariaDB's MaxScale, at least not since I learned how to build it from source when needed. But as a support engineer working at MariaDB, I sometimes have to deal with problems related to MaxScale, and this week it so happened that I had to explain how to implement automatic restarts of the MaxScale "daemon" after crashes on RHEL 6.x.

In the process I found out that two of the Linux distributions I use most often, CentOS 6.x and Ubuntu 14.04, actually use Upstart, so good old System V init tricks and scripts work there only partially, and only because somebody cared to integrate them into this "new" …
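Under Upstart, the restart-on-crash requirement maps to a respawn stanza in a job file. Here is a minimal sketch, assuming the binary lives at /usr/bin/maxscale and that -d keeps it in the foreground (both are assumptions; check them against your MaxScale build):

```
# /etc/init/maxscale.conf -- hypothetical Upstart job for MaxScale
description "MariaDB MaxScale"

start on runlevel [2345]
stop on runlevel [016]

# Restart automatically on crash, but give up if it crashes
# more than 5 times within 60 seconds.
respawn
respawn limit 5 60

# Upstart tracks the process itself, so MaxScale must not fork
# into the background here.
exec /usr/bin/maxscale -d
```

With this in place, `initctl start maxscale` starts the service and Upstart re-runs the exec line whenever the process dies unexpectedly.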

This week I got a question that sounded basically like this:
"Is it possible to just copy the entire partition from the replicated server?"

Let me share some background. As happens sometimes, a user had a huge table with many partitions, let's say hundreds of gigabytes in size each, and one of them unfortunately got corrupted. It happened on the master in a replication setup, but luckily they had used innodb_file_per_table=1 and they had a slave that was more or less in sync with the master. This allowed them to reconfigure replication and continue working, but the task remained to eventually put the master back in use and get correct data into the corrupted partition. Let's assume that dumping and reloading the data from one of the instances in the replication setup is not a desired option, as it would take too much time compared to just copying the partition's tablespace file. Hence the question above...

Side note: Let's assume …
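The copy itself can be sketched with InnoDB's transportable tablespaces. The table name t1 and partition name p3 below are hypothetical, and per-partition DISCARD/IMPORT syntax requires MySQL 5.7.4 or later; the idea would look roughly like this:

```sql
-- On the slave (the source of good data): quiesce the table and
-- flush its tablespace files to disk so the .ibd can be copied.
FLUSH TABLES t1 FOR EXPORT;
-- ... copy t1#P#p3.ibd (and the generated .cfg file) aside ...
UNLOCK TABLES;

-- On the master: throw away the corrupted partition's tablespace,
-- place the copied file into the database directory, then adopt it.
ALTER TABLE t1 DISCARD PARTITION p3 TABLESPACE;
-- ... copy t1#P#p3.ibd into place in the master's datadir ...
ALTER TABLE t1 IMPORT PARTITION p3 TABLESPACE;
```

On older 5.6 servers, where only whole-table DISCARD/IMPORT exists, the same result is typically reached by importing the copied file into a staging table and then swapping it in with ALTER TABLE ... EXCHANGE PARTITION.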

Adding good content to Twitter can be a pain. I can’t do it
during working hours, and I don’t have much time at night. But,
the more content you have, the more followers you can gain, and
the more your original tweets can be seen (hopefully).…

In two earlier posts, I gave some examples of how to use Perl to send tweets stored in a MySQL database to Twitter, and then how to automatically reply to your retweets with a "thanks". In this post, I will show you how to automatically download your direct messages from Twitter, store the messages in a MySQL database, and then delete them.…

In this post we will use the MySQL 5.7.7 labs release, which has support for JSON documents stored in a special data type. We will use Connector/Python and show how to get going with updating documents and getting data out.
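As a preview of the statements involved, here is a minimal sketch; the table name (cities) and document contents are my own assumptions rather than the post's, and the actual Connector/Python round-trips are shown only in comments:

```python
import json

# Hypothetical table and document -- names are assumptions,
# not taken from this post.
doc = {"name": "Duckburgh", "population": 120000}

create_stmt = (
    "CREATE TABLE cities ("
    " id INT AUTO_INCREMENT PRIMARY KEY,"
    " info JSON)"
)

# Connector/Python uses %s placeholders; json.dumps() produces the
# JSON text that the server validates and stores in the JSON column.
insert_stmt = "INSERT INTO cities (info) VALUES (%s)"
insert_params = (json.dumps(doc),)

# Reading a single field back out, evaluated server-side:
select_stmt = "SELECT JSON_EXTRACT(info, '$.population') FROM cities"

# Against a live 5.7.7 labs server the calls would look like:
#   import mysql.connector
#   cnx = mysql.connector.connect(user="root", database="test")
#   cur = cnx.cursor()
#   cur.execute(create_stmt)
#   cur.execute(insert_stmt, insert_params)
#   cnx.commit()
print(insert_params[0])
```

The point of the sketch is that the application hands the server plain JSON text; the JSON data type then validates it and makes path expressions such as '$.population' work.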

Required reading

If you wonder what this is all about, please check the following
great reads from the MySQL Server team:

Join 28,000 others and follow Sean Hull on Twitter @hullsean.

MySQL slow query on RDS: If you run MySQL as your backend datastore, is there one thing you can do to improve performance across the application? Those SQL queries are surely key, and the quickest way to find the culprits is to regularly analyze your […]

There may be times when you need to create a new table in MySQL and feed it with data from another database, the Internet, or combined data sources. MS Excel is commonly used as the bridge between those data sources and a target MySQL database because of the simplicity it offers for organizing the information before dumping it into a new MySQL table. Although that last bit sounds trivial, it may actually be a cumbersome step: creating ODBC connections within Excel through Microsoft Query may not help, since these are normally created to extract data from MySQL into Excel, not the opposite. What if you could do this in a few clicks from within Excel after making your data ready for export to a MySQL database?

With MySQL for Excel you can do this, and this guide will teach you how easy it is.

Content reproduced on this site is the property of the respective copyright holders.
It is not reviewed in advance by Oracle and does not necessarily represent the opinion
of Oracle or any other party.