The Microsoft cloud platform allows you to provide resiliency for your on-premises SQL Server deployments. Marcin Policht describes this functionality, available as part of the Azure Site Recovery solution.

When you think of Cloud services for database administration, Database as a Service (DBaaS) is what tends to come to mind, but one of the ways that the Cloud has had an enormous impact on our work productivity is in the area of collaboration.

Introduction to MySQL Troubleshooting Webinar: Q & A for How to Create a Test Setup

MariaDB is a community-developed fork of MySQL and aims to be an enhanced, drop-in replacement for it.

The MySQL binary packages have been growing a lot over the last few GA versions.

In a recently recorded webinar, How To Deliver Faster, Better Software Releases with DevOps, Pythian’s Chief Digital Officer, Aaron Lee and guest speaker Amy DeMartine, Senior Research Analyst at Forrester Research, discussed DevOps and the important role it plays in the race to deliver the best customer experience with faster and better software releases.

One of the webinar participants asked about the critical success factors when implementing a rapid release software model.

The case for adopting agile operations is very compelling. Rapid release cycles can reduce costs, provide better and more stable solutions, increase customer and employee satisfaction, and most importantly, outpace competition and drive revenue.

Driving both efficiency and effectiveness of operations is easier said than done. DeMartine highlighted 7 habits of effective DevOps, but stressed that culture is the number one enemy.

She said, “this fear of change is a huge cultural gap. DevOps is at the point of cultural change, and this is one of the biggest ones that IT has to break”.

Historically, IT teams have held large and lengthy change boards, requiring a strong reliance on rigid approval processes. Combine that history with the complexity and degree of unknowns that operations faces, and it isn’t surprising that some teams are risk averse and consensus driven.

While new technologies, processes, and automation are critical to implementing rapid release models and enhancing DevOps, both DeMartine and Lee agreed that it boils down to the people, and the way Dev and Ops interact with each other.

“It’s clear what the responsibilities of a product manager are. It’s clear where the developer’s responsibilities start and stop,” said Lee. Often this leads to operations “picking up everything else” and facing an increased amount of unknowns.

Part of the solution is understanding how to embrace and maximize the opportunity for change, as opposed to minimizing the occurrence of change.

Most importantly, Dev and Ops need to know and understand each other. By setting and balancing clear boundaries and expectations, each team can understand the role the other is playing, and begin to see the environment under a common goal.

It looks like Hortonworks recently decided to stop hosting a version of their Sandbox VM for Windows Hyper-V. I only see VirtualBox and VMware versions listed.

What if, like me, your primary learning lab machine runs Hyper-V?

Well, you can convert it fairly easily. My method is to use VirtualBox to do this.

I run VirtualBox on my Mac because it’s free, it has free conversion tools and I usually only run 1-2 VMs on it, but my Mac isn’t my learning lab. This tip WILL work on a Windows machine that has VirtualBox installed.

Note that VirtualBox and Hyper-V may not get along well if installed on the same device, hence my using two machines to do this.

This process runs for a bit and creates a copy in VHD format, which you can copy onto, and run from, any Hyper-V machine.
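The exact conversion command isn't shown above; a minimal sketch using VirtualBox's command-line tool, assuming the downloaded Sandbox disk is a VMDK file (the filenames here are hypothetical), would be:

# convert the VirtualBox/VMware disk image into a Hyper-V compatible VHD
VBoxManage clonehd Hortonworks_Sandbox.vmdk Hortonworks_Sandbox.vhd --format VHD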

Simply create a new Hyper-V machine, as you normally would, but instead of creating a new disk, choose this one and fire it up.

On the subject of VM configuration: give it access to your internal network so that you can access it via browser, and assign it a couple of processors. On memory, a word of caution: when I did this with dynamic memory enabled, the VM took all of my available system memory, so you may want to limit consumption to a number that reserves some computing power for the host and any other VMs you may want to run in parallel.

After mounting and starting my new Hyper-V VM, I found that I hadn’t allocated enough RAM or processors and it was “dying” on boot, so I upped the RAM to 6 GB and the processors to 4, from 2 GB and 1 respectively.

Next up, eth0 wasn’t found on boot so I checked what Google had to say and found this article.

I edited the first file, and upon checking the second (/etc/sysconfig/network-scripts/ifcfg-eth0) I found that the MAC address was not recorded, so there was nothing to do there.

I saved, rebooted, and watched as eth0 was found this time – of course the VirtualBox add-in failed at boot, but that isn’t a big deal.

When the VM came up, it instructed me to connect to 127.0.0.1:8888, which didn’t work. I looked up the IP assigned by my router, put that IP (without a port) into my browser, and was able to connect without issue.

March 2, 2016 was officially the midpoint of the Technovation Challenge in Ottawa. The 2016 competition started on Sunday January 17, at Carleton University, where Anar Simpson, Global Ambassador for Technovation, kicked off the program.

Technovation is a global technology entrepreneurship competition for young women that sets out to prove that there’s more than just coding in the technology sector. The program is designed to inspire and educate young women to pursue a career in technology by showing them all aspects of starting a technology business.

Regional Technovation chapters contact local high schools to introduce the program and recruit teams of high school girls. Thanks to the efforts of Jennifer Francis, chair of Women Powering Technology, an Ottawa chapter of Technovation started up in January 2015. The pilot was such a success that participation in 2016 has doubled, with over 100 high school and middle school girls participating and 30 female mentors from the Ottawa tech sector.

In addition to IBM, Shopify, and L-Spark, Pythian is a proud sponsor of the 2016 competition. Having just announced the Pythia program, it was a natural fit for Pythian to sponsor Technovation. The Pythia program focuses on increasing the percentage of talented women who work at Pythian, especially in tech roles. It also encourages and supports the participation of girls and women in STEM fields, which is exactly what Technovation is all about.

The support of the sponsors allows the teams to meet weekly at the sponsors’ facilities. Here the teams, along with their mentors, guest speakers and instructor from Carleton University’s Technology Innovation Management (TIM) program, focus on identifying a problem facing their community, creating an App to solve the problem, coding the App, building a company, and pitching their business plan to experts in the field! It’s pretty impressive considering the high school girls squeeze this competition in on top of their day-to-day school classes and after-school activities. They are all committed and dedicated – a great sign of future leaders!

“My views of working in the technology sector have changed, since it feels like something anyone can be a part of, whereas it was a distant idea before,” said 17-year-old Doris Feng, a student at Merivale High School and member of the team Women With Ambition. “I came in with the notion that we would be coding during the first week, but it turns out much of the development takes place off screen, with many hours dedicated to brainstorming, surveying users, drawing a paper prototype, and mulling over the ideas with team members.”

I couldn’t have said it better Doris! This is exactly what happens in the real world.

Technovation is a program designed to inspire women to pursue the entrepreneurial spirit in all of us. For more information on Technovation and starting your own local chapter, visit Technovation online. Globally, Technovation is sponsored by Adobe Foundation, Google, Verizon, CA Technologies, Intel and Oracle, in partnership with UN Women, UNESCO and MIT Media Lab.

In a previous post (OpenTSDB and Google Cloud Bigtable) we discussed OpenTSDB, an open source distributed database specifically designed for storing timeseries data. We also explained how OpenTSDB relies on Apache HBase for a reliable and scalable data backend. However, deployment and administration of an HBase cluster is not a trivial task, as it requires a full Hadoop setup. This means that it takes a big data engineer (or better a team of them) to plan for the cluster sizing, provision the machines and setup the Hadoop nodes, configure all services and tune them for optimal performance. If this is not enough, Operations teams have to constantly monitor the cluster, deal with hardware and service failures, perform upgrades, backup regularly, and a ton of other tasks that make maintenance of a Hadoop cluster and OpenTSDB a challenge for most organizations.

With the release of Google Bigtable as a cloud service and its support for the HBase API, it was obvious that if we managed to integrate OpenTSDB with Google Bigtable, we would enable more teams to have access to the powerful functionality of OpenTSDB by removing the burden from maintaining an HBase cluster.

Nevertheless, integration of OpenTSDB with Bigtable was not as seamless as dropping a few jars in its release directory. This happened because the OpenTSDB developers went over and above the standard HBase libraries, by implementing their very own asynchbase library. Asynchbase is a fully asynchronous, non-blocking, thread-safe, high-performance HBase API. And no one can put it better than the asynchbase developers themselves who claim that ‘This HBase client differs significantly from HBase’s client. Switching to it is not easy as it requires one to rewrite all the code that was interacting with any HBase API.’

This meant that integration with Google Bigtable required OpenTSDB to switch back to the standard HBase API. We saw the value of such an effort here at Pythian and set about developing this solution.

The asyncbigtable library

Today, we are very happy to announce the release of the asyncbigtable library. The asyncbigtable library is a 100% compatible implementation of the great asynchbase library that can be used as a drop-in replacement, enabling OpenTSDB to use Google Bigtable as a storage backend.

To create asyncbigtable we had to overcome two great challenges. The first one was that OpenTSDB assumes that the underlying library (until now asynchbase) performs asynchronous and non-blocking operations. On the other hand, the standard HBase API only supports synchronous and blocking calls. As a workaround for this, we used the BufferedMutator implementation that collects all Mutation operations in a buffer and performs them in batches, allowing for mutations with an extremely low latency.

The second challenge stemmed from the fact that the OpenTSDB project has a very limited set of jar dependencies, which are explicitly defined in Makefiles. Contrary to this spartan approach, the HBase and Bigtable client libraries have a significant number of transitive dependencies. Since adding those dependencies one-by-one in the OpenTSDB build process would complicate its dependency management, we decided to package all asyncbigtable dependencies in an uber-jar using the Maven assembly plugin. Therefore, building OpenTSDB with asyncbigtable support is now as simple as downloading a single beefy jar.

Build steps

Before you start

Before you build OpenTSDB with Google Bigtable support, you must complete the following required steps:

Our work on asyncbigtable by no means stops here. We are putting great effort towards improving the library to achieve the high quality standards of the rest of the OpenTSDB code. Our first priority is to test the library against most real world scenarios and achieve the highest quality. In the future, we plan to benchmark the performance of OpenTSDB with Bigtable and compare how it competes against HBase.

We are also working on building a true asynchronous implementation of the asyncbigtable library by integrating deeper with the Google Bigtable API.

Acknowledgements

We would like to thank the OpenTSDB developers (Benoît Sigoure and Chris Larsen) for their brilliant work in building such great software and for embracing the asyncbigtable library. Their insights and code contributions helped us deal with some serious issues. Also, we would like to thank the Google Cloud Bigtable team because they expressed genuine interest in this project and they were very generous in providing us with cloud infrastructure and excellent support.

Instead of using ‘fly in ointment’, I have used ‘Bug in Ointment’ because in this prolonged Australian summer, my backyard is full of bugs (to the sheer delight of my bug-loving son, at the same time causing much anxiety among the rest of us). When your backyard is full of bugs and you get bugs in a database, it’s only natural to customize the idioms.

Oracle 12c has been warming up the hearts of database aficionados in various ways with its features. One of the celebrated features is online datafile moving and renaming. Lots has been written about it, and suffice to say that we don’t need any downtime in order to move, rename, or copy data files anymore. It’s an online operation with zero downtime, incurring only a slight performance overhead.

I was playing with this feature on my test system with Oracle 12.1 on OEL 6, and when moving a datafile in a pluggable database I got this error:

A test upgrade from 11g to 12c and conversion to a container and pluggable database recently pointed out some important 12c behavior differences with respect to the DBCA and whether or not it automatically applies PSUs installed in the Oracle Home.

The original objective was to take an existing 11.2.0.4 database and upgrade it to 12.1.0.2 and convert it to a PDB.

From a high level the procedure was:

Install the Oracle 12.1.0.2 software and apply the latest PSU (in this case the JAN2016 PSU).

Create a new CDB to house the upgraded database.

Upgrade the 11.2.0.4 database to 12.1.0.2 in-place using the DBUA.

Convert the upgraded 12c database to a PDB (via the clone through DB link method).

Seemed pretty straightforward. However, as part of the PDB conversion (running of the noncdb_to_pdb.sql script), the following error was encountered:

The noncdb_to_pdb.sql script can only be run once so at this point the PDB conversion has failed and must be restarted. But first we must understand what went wrong or what steps we missed.

Root Cause: DBCA no longer automatically applies PSUs

It’s obvious from the ORA-04045 error that the issue is related to patching. But the question is still what was missed in the process since the 12c Oracle Home was fully patched before creating or upgrading any databases?

The problem is that DBAs may have become complacent with respect to PSU application after creating databases. With Oracle Database 11g, whenever we created a database via the DBCA, the latest PSU was automatically applied. It didn’t matter whether we created the database from a template or used a custom install. Regardless of which DBCA method was used, after DB creation we’d see something similar to:

Clearly the latest PSU (JAN2016 in this case) installed in the Oracle Home was applied automatically by the DBCA. And of course this is reflected in the official README documentation (in this example for DB PSU patch 21948347 [JAN2016] – requires a My Oracle Support login to view) which states:

There are no actions required for databases that have been upgraded or created after installation of PSU 11.2.0.4.160119.

However this functionality has completely changed with Oracle Database 12c! The change in behaviour is documented in My Oracle Support (MOS) Note: “12.1:DBCA (Database Creation) does not execute “datapatch” (Doc ID 2084676.1)” which states:

DBCA does not execute datapatch in Oracle 12.1.0.X. The solution is to apply the SQL changes manually after creating a new Database

You must execute the steps in Section 3.3.2, “Loading Modified SQL Files into the Database” for any new or upgraded database.

This is a significant change in behaviour and is the root cause of the PDB creation error!

Resolving the “ORA-00600 [kql_tab_diana:new dep]” error

Back to the CDB creation error, the first logical place to check whenever experiencing plug-in or PDB creation errors is the PDB_PLUG_IN_VIOLATIONS view:

SQL> CREATE PLUGGABLE DATABASE MY_PROD FROM NON$CDB@clone_link FILE_NAME_CONVERT=('/u01/app/oracle/oradata/MY_PROD','/u01/app/oracle/oradata/CPRD1/MY_PROD');
Pluggable database created.
SQL> SELECT name, type, status, message, action FROM pdb_plug_in_violations ORDER BY 1,2;
NAME TYPE STATUS MESSAGE ACTION
-------- --------- --------- ---------------------------------------- ----------------------------------------
MY_PROD ERROR PENDING PDB plugged in is a non-CDB, requires no Run noncdb_to_pdb.sql.
ncdb_to_pdb.sql be run.
MY_PROD WARNING PENDING CDB parameter compatible mismatch: Previ Please check the parameter in the curren
ous '11.2.0.4.0' Current '12.1.0.2.0' t CDB
MY_PROD WARNING PENDING Service name or network name of service Drop the service and recreate it with an
MY_PROD in the PDB is invalid or conflic appropriate name.
ts with an existing service name or netw
ork name in the CDB.
SQL>

Nothing there is really concerning yet. It’s pretty much what we’d expect to see at this point. However, taking the next step in the PDB clone process encounters the error:

SQL> SELECT name, type, status, message, action FROM pdb_plug_in_violations ORDER BY 1,2;
NAME TYPE STATUS MESSAGE ACTION
-------- --------- --------- -------------------------------------------------- --------------------------------------------------
MY_PROD ERROR PENDING SQL patch ID/UID 22139226/19729684 (Database PSU 1 Call datapatch to install in the PDB or the CDB
2.1.0.2.160119, Oracle JavaVM Component (Jan2016))
: Installed in the PDB but not in the CDB.
MY_PROD ERROR PENDING PDB plugged in is a non-CDB, requires noncdb_to_pd Run noncdb_to_pdb.sql.
b.sql be run.
MY_PROD WARNING RESOLVED Service name or network name of service MY_PROD in Drop the service and recreate it with an appropria
the PDB is invalid or conflicts with an existing te name.
service name or network name in the CDB.
MY_PROD WARNING RESOLVED CDB parameter compatible mismatch: Previous '11.2. Please check the parameter in the current CDB
0.4.0' Current '12.1.0.2.0'
MY_PROD WARNING PENDING Database option OLS mismatch: PDB installed versio Fix the database option in the PDB or the CDB
n NULL. CDB installed version 12.1.0.2.0.
MY_PROD WARNING PENDING Database option DV mismatch: PDB installed version Fix the database option in the PDB or the CDB
NULL. CDB installed version 12.1.0.2.0.
6 rows selected.
SQL>

At this point since the CDB clone has failed and since the noncdb_to_pdb.sql script cannot be run twice, the new PDB should be dropped. Resolving the root cause of the error by patching and then repeating the clone is necessary.

Applying the PSU

Fortunately the fix is conceptually simple: apply the PSU patch into the database. Though the catch is that I actually had installed the “Combo of 12.1.0.2.160119 OJVM PSU and 12.1.0.2.160119 DB PSU (Jan 2016)” bundle patch (22191659) into the Oracle Home. This combo includes the DB PSU (patch 21948354) plus the OJVM PSU (patch 22139226). And while the DB PSU can be applied without outage, the OJVM patch cannot. Instead for the OJVM patch or the combo, the CDB and the PDBs must all be restarted in UPGRADE mode.

Restarting in UPGRADE mode is fine in this case study where the CDB was just recently created to house the newly upgraded PDB. But if trying to plug the new database into an existing CDB with other applications running in production, shutting down the entire CDB to run datapatch may cause a problem.

Following the README documentation for just the JAN2016 DB PSU (patch 21948354) doesn’t help. It states that the patch can be applied with the database and pluggable databases open (section “3.3.2 Loading Modified SQL Files into the Database“). However, because I’ve installed the combo patch into the Oracle Home, trying to patch with the database open will cause the patching to fail:

$ ./datapatch -verbose
SQL Patching tool version 12.1.0.2.0 on Fri Mar 4 15:45:27 2016
Copyright (c) 2015, Oracle. All rights reserved.
Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_3260_2016_03_04_15_45_27/sqlpatch_invocation.log
Connecting to database...OK
Note: Datapatch will only apply or rollback SQL fixes for PDBs
that are in an open state, no patches will be applied to closed PDBs.
Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
(Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done
Current state of SQL patches:
Patch 22139226 (Database PSU 12.1.0.2.160119, Oracle JavaVM Component (Jan2016)):
Installed in the binary registry only
Bundle series PSU:
ID 160119 in the binary registry and not installed in any PDB
Adding patches to installation queue and performing prereq checks...
Installation queue:
For the following PDBs: CDB$ROOT PDB$SEED
Nothing to roll back
The following patches will be applied:
22139226 (Database PSU 12.1.0.2.160119, Oracle JavaVM Component (Jan2016))
21948354 (Database Patch Set Update : 12.1.0.2.160119 (21948354))
Error: prereq checks failed!
patch 22139226: The pluggable databases that need to be patched must be in upgrade mode
Prereq check failed, exiting without installing any patches.
Please refer to MOS Note 1609718.1 and/or the invocation log
/u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_3260_2016_03_04_15_45_27/sqlpatch_invocation.log
for information on how to resolve the above errors.
SQL Patching tool complete on Fri Mar 4 15:45:52 2016
$

The solution to this error is to start the CDB and PDBs in UPGRADE mode (as per the OJVM patch documentation) and then re-run datapatch:
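The exact commands aren't reproduced here, but a minimal sketch of that sequence (assuming a local SYSDBA connection) looks like this:

$ sqlplus / as sysdba
SQL> shutdown immediate
SQL> startup upgrade
SQL> alter pluggable database all open upgrade;
SQL> exit
$ cd $ORACLE_HOME/OPatch
$ ./datapatch -verbose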

Note that the first time, the error was related to the OJVM PSU patch and stated that the PDB was patched but the CDB was not. Now, after patching the CDB, the error message states that the DB PSU patch is installed in the CDB but not the PDB.

Again the solution is to run datapatch one more time. Fortunately since we’re only patching a PDB, we no longer need to worry about starting the CDB and PDBs in UPGRADE mode to apply the OJVM patch. The OJVM patch does not apply to the PDBs. Hence we can patch successfully with both the CDB and PDBs open:
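The original output isn't shown here; a sketch of that final pass, with everything open normally, would be:

$ sqlplus / as sysdba
SQL> startup
SQL> alter pluggable database all open;
SQL> exit
$ cd $ORACLE_HOME/OPatch
$ ./datapatch -verbose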

What started out as an issue when cloning a non-CDB into a PDB led to some learning about patching with Oracle Database 12c.

The most important take-away is that Oracle Database 12c introduces a change in behaviour when it comes to patch applications through the DBCA. This change is well documented in both the patch and MOS documents. So if a DBA reads through the documentation thoroughly, they won’t have a problem. However if the DBA is used to doing things the “old way” and only skims through the documentation they may unexpectedly get caught with errors such as the ORA-00600 encountered when creating a PDB through cloning.

User management on Linux systems can be tedious, and once you have more than 10 systems, chances are it is going to take a good amount of time to manage user accounts on all of them individually.

There are various tools available to overcome this, and all of these use LDAP in some way.

The same goes for Red Hat Directory Server, an LDAP-based directory server from Red Hat that provides centralized user management. Though I have primarily demonstrated integration of Red Hat Directory Server with Linux systems, it can be used with all systems which support LDAP authentication.

You can find the official Red Hat Directory Server installation guide here.

For our test scenario I used two RHEL 5 servers: server101, which is the Red Hat Directory Server, and server201, which is the client.

For RHEL-based systems you need to make sure that you are subscribed to the RHDS repo for installing Red Hat Directory Server. If you are using CentOS or other derivatives you can use 389 Directory Server, which is the upstream for Red Hat Directory Server.

Once you have the prerequisites ready you can start with the installation.

Installing Red Hat Directory Server

I have designated server101 as Red Hat Directory Server.

Below are the steps to install the packages required for Red Hat Directory Server.

[root@server101 ~]#yum install redhat-ds -y

Installing RHDS

Once the installation is complete we can move to configuring Red Hat Directory Server.

Configuring Red Hat Directory Server

[root@server101 ~]# setup-ds-admin.pl

Once you run this command you will be prompted for inputs by the setup script, which are mostly straightforward.

But there are a few things that need to be taken care of before we proceed with the configuration.

We want to run the ldap service as the ldap user, so create the ldap user and group if they are not already there.

Then open the below ports on your firewall/iptables so that directory server can work properly.

389 for LDAP service

636 for secure LDAP service (LDAPS)

9830 for directory server admin console connectivity

You should also increase the number of file descriptors as it can help Red Hat Directory Server access files more efficiently. Editing the maximum number of file descriptors the kernel can allocate can also improve file access speeds.

First, check the current limit for file descriptors in /proc/sys/fs/file-max

If the setting is lower than 64000, edit the /etc/sysctl.conf file and set the fs.file-max parameter to 64000 or higher.

Then increase the maximum number of open files on the system by editing the /etc/security/limits.conf configuration file. Add the following entry

* - nofile 8192
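Putting those steps together, a minimal sketch of the commands (run as root) might look like this:

# check the current kernel-wide limit
cat /proc/sys/fs/file-max
# raise it if it is below 64000
echo "fs.file-max = 64000" >> /etc/sysctl.conf
sysctl -p
# raise the per-process limit on open files
echo "* - nofile 8192" >> /etc/security/limits.conf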

After this we can start configuring Red Hat Directory Server with setup-ds-admin.pl command.

Once it is executed, it will prompt for inputs which are mostly self-explanatory, like below. Mostly we will accept the default values, as this is a fresh installation. We will only change the system user and group which will run the ldap service from nobody to the ldap user we created earlier. Don’t forget to make a note of the passwords for admin and Directory Manager, as they will be used to log in to the Admin Console.

[root@server101 ~]# setup-ds-admin.pl -k

==============================================================================
This program will set up the Red Hat Directory and Administration Servers.

It is recommended that you have "root" privilege to set up the software.
Tips for using this program:
  - Press "Enter" to choose the default and go to the next screen
  - Type "Control-B" then "Enter" to go back to the previous screen
  - Type "Control-C" to cancel the setup program

Would you like to continue with set up? [yes]: yes

==============================================================================
BY SETTING UP AND USING THIS SOFTWARE YOU ARE CONSENTING TO BE BOUND BY AND ARE BECOMING A PARTY TO THE AGREEMENT FOUND IN THE LICENSE.TXT FILE. IF YOU DO NOT AGREE TO ALL OF THE TERMS OF THIS AGREEMENT, PLEASE DO NOT SET UP OR USE THIS SOFTWARE.

Do you agree to the license terms? [no]: yes

==============================================================================
Your system has been scanned for potential problems, missing patches, etc. The following output is a report of the items found that need to be addressed before running this software in a production environment.

1. Express Allows you to quickly set up the servers using the most common options and pre-defined defaults. Useful for quick evaluation of the products.

2. Typical Allows you to specify common defaults and options.

3. Custom Allows you to specify more advanced options. This is recommended for experienced server administrators only.

To accept the default shown in brackets, press the Enter key.

Choose a setup type [2]:

==============================================================================
Enter the fully qualified domain name of the computer on which you're setting up server software. Using the form <hostname>.<domainname> Example: eros.example.com.

To accept the default shown in brackets, press the Enter key.

Computer name [server101.suratlug.org]: server101.example.com

==============================================================================
The servers must run as a specific user in a specific group. It is strongly recommended that this user should have no privileges on the computer (i.e. a non-root user). The setup procedure will give this user/group some permissions in specific paths/files to perform server-specific operations.

If you have not yet created a user and group for the servers, create this user and group using your native operating system utilities.

System User [nobody]: ldap
System Group [nobody]: ldap

==============================================================================
Server information is stored in the configuration directory server. This information is used by the console and administration server to configure and manage your servers. If you have already set up a configuration directory server, you should register any servers you set up or create with the configuration server. To do so, the following information about the configuration server is required: the fully qualified host name of the form <hostname>.<domainname> (e.g. hostname.example.com), the port number (default 389), the suffix, the DN and password of a user having permission to write the configuration information, usually the configuration directory administrator, and if you are using security (TLS/SSL). If you are using TLS/SSL, specify the TLS/SSL (LDAPS) port number (default 636) instead of the regular LDAP port number, and provide the CA certificate (in PEM/ASCII format).

If you do not yet have a configuration directory server, enter 'No' to be prompted to set up one.

Do you want to register this software with an existing configuration directory server? [no]:

==============================================================================
Please enter the administrator ID for the configuration directory server. This is the ID typically used to log in to the console. You will also be prompted for the password.

==============================================================================
The information stored in the configuration directory server can be separated into different Administration Domains. If you are managing multiple software releases at the same time, or managing information about multiple domains, you may use the Administration Domain to keep them separate.

If you are not using administrative domains, press Enter to select the default. Otherwise, enter some descriptive, unique name for the administration domain, such as the name of the organization responsible for managing the domain.

Administration Domain [example.com]:

==============================================================================
The standard directory server network port number is 389. However, if you are not logged as the superuser, or port 389 is in use, the default value will be a random unused port number greater than 1024. If you want to use port 389, make sure that you are logged in as the superuser, that port 389 is not in use.

Directory server network port [389]:

==============================================================================
Each instance of a directory server requires a unique identifier. This identifier is used to name the various instance specific files and directories in the file system, as well as for other uses as a server instance identifier.

Directory server identifier [server101]:

==============================================================================
The suffix is the root of your directory tree. The suffix must be a valid DN. It is recommended that you use the dc=domaincomponent suffix convention. For example, if your domain is example.com, you should use dc=example,dc=com for your suffix. Setup will create this initial suffix for you, but you may have more than one suffix. Use the directory server utilities to create additional suffixes.

Suffix [dc=example, dc=com]:

==============================================================================
Certain directory server operations require an administrative user. This user is referred to as the Directory Manager and typically has a bind Distinguished Name (DN) of cn=Directory Manager. You will also be prompted for the password for this user. The password must be at least 8 characters long, and contain no spaces. Press Control-B or type the word "back", then Enter to back up and start over.

==============================================================================
The Administration Server is separate from any of your web or application servers since it listens to a different port and access to it is restricted.

Pick a port number between 1024 and 65535 to run your Administration Server on. You should NOT use a port number which you plan to run a web or application server on, rather, select a number which you will remember and which will not be used for anything else.

Administration port [9830]:

==============================================================================
The interactive phase is complete. The script will now set up your servers. Enter No or go Back if you want to change something.

We will use the Directory Server admin console to connect to the GUI and create the ldap user from there.

We can invoke the directory server admin console GUI with redhat-idm-console. It will open a GUI like below.

Directory Server Admin Console GUI

The user ID is the Directory Manager created during directory server setup; by default it is cn=Directory Manager. Now enter your password and the Administration URL, http://server101:9830.

Directory Server Admin Console

Once you log in you will be presented with the console screen as below.

Now click on the Users and Groups tab, then click the Create button and select User from the menu.

Now select the organizational unit; mostly we will use the default and select People from the list as below.

It will open Create User menu.

Now we will create the ldapuser account as shown below. Fill in the required details. Also select the Posix User tab, as we need the account for Unix system login, and fill in the required Posix account details as well.

Now that we have created the user account, we can start configuring the client.

Configuring Linux client for LDAP login

I have created server201, which we will configure for LDAP login.

For that we need to execute authconfig-tui from the console.

It will open a terminal UI to configure authentication to use LDAP.

[root@server201 pam.d]# authconfig-tui

Select Use LDAP for user information.

Select Use LDAP Authentication.

After this we need to make sure that when a user logs in to the server with LDAP authentication, the home directory is created automatically; this is not enabled by default.

We can do this by executing the below command at the console.

[root@server201 pam.d]# authconfig --enablemkhomedir --update

Once this is done you can use your ldap user to log in to the client server.
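A quick way to verify the client configuration (the account and host names below are the ones created above; adjust them to your environment) is:

# on server201: confirm the directory account is resolvable through LDAP
getent passwd ldapuser
id ldapuser
# then log in; the home directory should be created automatically on first login
ssh ldapuser@server201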

Now that we have set up LDAP, we can use it for centralized login for all Linux systems in the environment.

User management is easy from a single location.

We can also configure TLS and do replication for redundancy.

We can define schema and policies as well but that is for another time.

Maintaining mission critical databases on our pitchfork wielding brother, the “Daemon” of FreeBSD, seems quite daunting, or even absurd, from the perspective of a die-hard Linux expert, or from someone who has not touched it in a long time.

Planets9s: Sign up for our best practices webinar on how to upgrade to MySQL 5.7

The Network tab under DevTools can be used to debug a wide variety of issues. It records every request made while a web page is loading, and captures a wide range of information about each request, such as the HTTP method, the status, and the time taken to complete the request.

By clicking on any of the requested resources, we will be able to get more information on the request.

In this case, the interesting bit was under the Preview tab. The Preview tab captures the data Chrome got back from the search and stores it as objects.

A successful query would look like the image below captured from Kibana3 of public website logstash.openstack.org.

Field data breaker – the percentage of JVM heap memory allowed for loading a single data field into memory, controlled by the indices.breaker.fielddata.limit setting. The default value is 60%. We recommend raising this limit if you are uploading data with large fields. For more information, see Field data in the Elasticsearch documentation.
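For example, on a self-managed cluster this limit can be raised through the cluster settings API; the host and the 75% value below are only illustrative, and on a managed service the equivalent setting may need to be applied through that service's own configuration options:

curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "persistent": {
    "indices.breaker.fielddata.limit": "75%"
  }
}'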

Any applications your company runs on premise can also be run in the public cloud. But does that mean they should be?

While the cloud offers well-documented benefits of flexibility, scalability, and cost efficiency, some applications — and especially business-critical enterprise applications — have specific characteristics that can make them tricky to move into a public cloud environment.

That’s not to say you shouldn’t consider the cloud as an option, but you should be aware of the following enterprise application needs before you make any migration decisions:

1. Highly customized infrastructure

Enterprise applications often rely on software components that are uniquely configured: they may need very specific storage layouts and security settings or tight integration with certain third-party tools. That makes it hard to replace them with generic platform-as-a-service (PaaS) alternatives in the cloud. The same is true on the infrastructure side: application software components often need particular network configurations and controls that aren’t available from a typical infrastructure-as-a-service (IaaS) offering. (An example would be the way Oracle Real Application Clusters have to allow the cluster software to manipulate network settings, such as controlling IP addresses and network interfaces.)

2. Tightly coupled components

Today’s cloud application architectures are based on “microservices” — collections of services that perform specific tasks. When combined, these answer the whole of the application requirements. With enterprise applications, there are so many interdependencies between the various software components that it can be extremely difficult to change, upgrade, move, or scale an individual component without having a huge impact on the rest of the system.

3. Siloed IT departments

Enterprise applications are usually supported by siloed enterprise IT operations — DBAs, system administrators, storage administrators, network administrators and the like — each with their own responsibilities. Cloud deployment, on the other hand, requires much greater focus on collaboration across the IT environment. This means breaking down traditional silos to create full-stack teams with vertical application ownership. Some teams are likely to resist this change as they could end up with significantly less work and responsibility once the management of application components has shifted to the cloud vendor. So migrating to the cloud isn’t just a technical decision; it has people-process implications, too.

4. Costly infrastructure upgrades

Every company knows upgrading enterprise applications is a major undertaking and can often cause downtime and outages. This is true when the application stays inside your own data center — and doubly so when it moves to a cloud provider due to how long it takes to move massive amounts of data through the Internet and risks associated with unknown issues on the new virtual platform. For these reasons, significant financial commitment is often required to build and maintain an IT team with the right skills to do upgrades quickly and effectively as well as maintain the system.

5. Inflexible licensing models

The components used in enterprise applications are often proprietary products with licensing models that are not compatible with the elasticity of the cloud. For example, many Oracle licenses are for legacy applications and can be used only on particular systems. Transferring those licenses to a cloud-based infrastructure is not an easy task.

In addition, perpetual software licenses are often not portable to the typical pay-as-you-go model used by most cloud providers. Plus, most software vendors don’t have any incentive to transition their customers from locked-in perpetual licenses with a steady maintenance revenue stream to a model that allows them to switch to a competitive product at any time.

Even though the nature of enterprise applications makes them difficult to migrate to the cloud, the benefits of doing so — in cost savings, availability, and business agility — still make it a very compelling proposition. In my next blog, I’ll take a look at some of the paths available to you should you decide to move your enterprise applications to the public cloud.

True inspiration comes in many forms and I consider myself fortunate to be inspired every day by the women around me. International Women’s Day is the perfect opportunity to reflect on the women in our lives who positively influence us.

This post is my heartfelt thank you to the women in my ‘circle’ who have made an indelible mark in my life. They are women who continue to inspire, challenge and motivate me.

The women on Pythian HR’s team: These women continually teach me valuable life lessons. They are mothers, partners, sisters, care providers, aunts, cousins, and friends to many. They are strong, spirited, supportive and have generous natures that are contagious. They demonstrate an unwavering commitment to working hard, they’re incredibly talented and they have a steady focus on doing what’s best for our employees. These women go above and beyond and approach every puzzle with optimism.

My mother: My mother is the most positive and ‘glass half full’ person that I know. She is a person who never fails to find the bright side to life’s most thought-provoking issues and one of her favourite questions to ask her loved ones is “Are You Happy?” (Spoiler alert: she’s not satisfied unless the answer is a truthful “yes”). Her love, guidance and support have helped sustain me through so much and over the years she has evolved into my BFF.

My friend, Jen: Jen is a breast cancer survivor who decided to fight back and co-found Vixens Victorious. In October 2015, the dynamic duo of Vixens Victorious successfully launched Lights! Camera! CURE! which showcases female film makers from Canada and the proceeds go to support the Ottawa Regional Cancer Society. Jen’s positive spirit and take charge attitude empowers everyone who meets her.

My friend, Kate: Kate moved to Canada with her three-month-old daughter to start a new journey with her husband. She took the initiative to make new friends, develop a network and often navigate a new city on her own when her partner travelled for work. Kate isn’t one to complain about adversities in life; she is courageous and gratefully embraces her adventure.

My fitness trainer Jules: Jules gets out of bed every morning, puts on her workout gear and travels across Ottawa to provide the most fun and effective workouts to her clients. She generously shares her own personal health journey and always finds a way to connect with her clients so they can experience the one on one attention they need. She is full of greatness.

Our family physician, Dr. Judy: Dr. Judy’s medical practice is thriving because of her commitment to patient care. She ensures you are her priority in the moments that you are with her. She makes each of her patients feel important, cared for and heard. Dr. Judy embodies a kind and caring nature that everyone could benefit from.

My neighbor, Anne Marie: In her late forties, Anne Marie taught herself to swim so she could begin competing in triathlons. She now travels internationally to compete in races. I’m inspired by her hard work, determination and strategic ability to set and meet goals.

The influences (sometimes subtle) of these women make an impact on how I choose to live my life. I am thankful for all of them.

On this International Women’s Day, I encourage you to think about who inspires you and why. Bonus points if you honour them with words of appreciation!

There were mixed reactions on the internal SQL Server teams. I was afraid to respond to the thread, fearing I would get burned again. I quickly checked the date to confirm that the article hadn’t been resurrected.

One of the sentiments expressed in our internal chatter was that some of the DBAs love “Satya’s Microsoft” and I agree. I like what they’re doing, but I am very skeptical about the move to port SQL Server onto Linux.

I doubt this will enable new markets and new customer bases. I don’t think there are any large organizations who will suddenly decide to adopt the product because it will run on Linux.

One theory was that this move was to attract new developers who want to deploy multi-platform tech stacks. That could certainly be right, but I think PaaS already satisfies that need for many of those startups.

Other articles I read theorized it was a move towards SQL Server on Linux-powered containers.

I’m wondering what this will mean for future features. Will PowerShell or .NET be ported to Linux? What will change in the security model? Will clustering be available? Will a more RAC-like feature be available?

These are very interesting times and while this wasn’t a move that I was pining for, or even expected, I am excited to see where this is going.

RSAC is my annual check in to learn new approaches to information security, discover new technology, learn from industry experts and build my community.

In the three years that I have been attending RSAC, I have learned that Pythian is unique and so are our clients. Each year, we continue to improve our security program with our clients in mind.

RSAC Day 1

It’s Day 1 of the RSAConference 2016. Mondays are typically a quiet day, with vendors setting up in the expo halls, conference staff getting organized, attendees registering, and a few press/analysts looking for optimal interview spots. It’s the calm before the storm of attendees descends on San Francisco and RSAC.

This Monday was a whirlwind of activity; CSA Summit, DevOps Connect, Information Security Leadership Development and IAPP: Privacy and Security to name only a few. Chances are you may have missed sessions if you weren’t early enough.

Privacy and Security were hot topics given the European General Data Protection Regulation (GDPR) agreement reached December 2015.

Once again, Pythian is organizing an event that by now may be considered a tradition: The MySQL community dinner at Pedro’s! This dinner is open to all MySQL community members since many of you will be in town for Percona Live that week. Here are the details:

What: The MySQL Community Dinner

When: Tuesday April 19, 2016 – 7:00 PM at Pedro’s (You are welcome to show up later, too!)

Maintaining mission critical databases on our pitchfork wielding brother, the “Daemon” of FreeBSD, seems quite daunting, or even absurd, from the perspective of a die-hard Linux expert, or from someone who has not touched it in a long time. The question we ask when we see FreeBSD these days is “why?”. Most of my own experience with FreeBSD was obtained 10-15 years ago. Back then, in the view of the team I was working on, a custom compiled-from-source operating system like FreeBSD 5.x or 6.x was superior to a Linux binary release.

Package managers like YUM and APT were not as good. They did not always perform MD5 checks and use SSL like today’s versions. RedHat wasn’t releasing security updates 5 minutes after a vulnerability was discovered. Ubuntu didn’t exist. Debian stable would get so very old before receiving a new version upgrade. FreeBSD was a great choice for a maintainable, secure, free open source UNIX-like OS with tight source control and frequent updates.

Most people do not understand why FreeBSD remains a great choice for security and stability. The main reason is that the entire source of the base OS and the kernel (not just the kernel) are tightly maintained and tested as a whole, monolithic, distribution.

FreeBSD 10.2 is different than versions I worked on many years ago, in a good way, at least from the standpoint of getting started. First, “pkg” has gotten quite an overhaul, making installing packages on FreeBSD as easy as with YUM or APT. portsnap and portmaster make port upgrades much easier than they used to be. freebsd-update can take care of wholesale updates of the operating system from trusted binary sources without having to “build the world”. These are welcome changes; ones that make it easier to get to production with FreeBSD, and certainly made the task of rapidly building and updating a couple of “lab” virtual machines easier.

In my effort to get re-acquainted with FreeBSD, I hit some snags. However, once I was finished with this exercise, FreeBSD had re-established itself in my mind as a decent flavor to host a mission critical database on. Open Source enthusiasts should consider embracing it without (much) hesitation. Is there some unfamiliar territory for those who only use MySQL on MacOS and Linux? Sure. But it is important to remember that BSD is one of the oldest UNIX like operating systems. The OSS world owes much heritage to it. It is quite stable and boring, perhaps even comfortable in its own way.

Problem 1: forcing older versions of MySQL

I needed to install MySQL 5.5 first, in order to test a mysql upgrade on FreeBSD. However, when installing percona-toolkit either via “pkg install” (binary) or /usr/ports (source), the later 5.6 version of the mysql client would inevitably be installed as a dependency. After that point, anything relating to MySQL 5.5 would conflict with the 5.6 client. If I installed in the opposite order, server first, percona-toolkit second, the percona-toolkit installation would ask me if it is OK to go ahead and upgrade both server and client to 5.6.

TIP: don’t forget make.conf

my /etc/make.conf:

MYSQL_DEFAULT?= 5.5

Once I added MYSQL_DEFAULT into make.conf, the installations for MySQL 5.5 became seamless. Note: if you want another flavor of MySQL server such as Percona Server, install the server “pkg install percona55-server” prior to “pkg install percona-toolkit” so that the client dependencies are met prior to installation.

Problem 2: Some tools don’t work

pt-diskstats does not work, because it reads from /proc/diskstats, which does not exist on FreeBSD. Other favorites like htop don’t work right out of the box. So far I have had good luck with the rest of the Percona toolkit besides pt-diskstats, but here’s how you get around the htop issue (and perhaps others).
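The post's original commands aren't included above; a typical FreeBSD setup for this (a hedged sketch, not necessarily the author's exact steps) would be something like:

# load the Linux binary compatibility module now, and on every boot
kldload linux
echo 'linux_enable="YES"' >> /etc/rc.conf
# mount a Linux-style procfs where the compat layer expects it
mkdir -p /compat/linux/proc
mount -t linprocfs linprocfs /compat/linux/proc
echo 'linprocfs /compat/linux/proc linprocfs rw 0 0' >> /etc/fstab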

As you may have determined, these commands ensure that the linux compatibility kernel module is loaded into the kernel, and that the linux style /proc is mounted in a different location than you might be used to “/compat/linux/proc”. The FreeBSD /proc may also be mounted.

Problem 3: I want bash

# pkg install bash

…and once that’s done:

# pw user mod root -s /usr/local/bin/bash

…and repeat ^^ for each user you would like to switch. It even comes with a prompt that looks like CentOS/RHEL:

[root@js-bsd1 ~]#

Problem 4: I can’t find stuff

BSD init is much simpler than SysV and upstart init frameworks so your typical places to look for start files are /etc/rc.d and /usr/local/etc/rc.d. To make things start on boot, it’s inevitably a line in /etc/rc.conf.

In our case, for MySQL, our start file is /usr/local/etc/rc.d/mysql-server. To have MySQL start on boot, your rc.conf line is:

mysql_enable="YES"

If you do not wish to make MySQL start on boot, you may simply say "/usr/local/etc/rc.d/mysql-server onestart"
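Putting that together, a minimal sketch:

# start MySQL on every boot
echo 'mysql_enable="YES"' >> /etc/rc.conf
service mysql-server start
# or, for a one-off start without enabling it at boot
/usr/local/etc/rc.d/mysql-server onestart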

Notes on binary replacement

Please note, just like in the Linux world, MariaDB and Percona Server are drop-in replacements for MySQL, so the start files and enable syntax do not change. Your default location for my.cnf is /etc/my.cnf, just like in the rest of the known universe.

This command lists all installed packages:

pkg info -a

Use pkg remove and pkg install to add new versions of your favorite MySQL software.

I ran into no greater issues with pkg than I would with yum or apt doing binary removals and installations, and no issues at all with mysql_upgrade. If you had to alter make.conf like I did earlier, remember to update it to reflect the versions you want to install.
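As a sketch, an in-place binary replacement from the 5.5 to the 5.6 packages (the package names here are illustrative) goes roughly like this:

pkg remove mysql55-server mysql55-client
pkg install mysql56-server mysql56-client
service mysql-server start
mysql_upgrade -u root -p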

For those who like ZFS, the FreeBSD handbook has a very detailed chapter on this topic. I for one like plain old UFS. It might be the oldest filesystem that supports snapshots, and it can be implemented very simply for those who like low overhead.

On a scorching November day in 2012, Sydneysiders were bracing themselves for yet another heat wave when all of a sudden they became pleasantly surprised as an elastic cloud occupied the tech skies. On November 12, 2012, Amazon announced the New Asia Pacific (Sydney) Region in Australia.

Before that, Australian customers had to reach out to Japan or Singapore for their cloud needs. That was not really feasible, as it increased up-front expenses, long-term commitments, and scaling challenges. Amazon recognized that and Sydney became another region in the world.

They have now taken it a step further by launching a new Availability Zone (AZ) in Sydney. An Availability Zone is basically an isolated location within a data centre region from which public cloud services originate and operate.

The new availability zone is ap-southeast-2c. This is all set to provide enhanced performance and scalability to Australian customers. This will enable them to fully leverage the potential of technologies like Lambda, the Elastic File System shared filesystem, and Amazon RDS for MS SQL Server.

Pythian’s established presence in Australia and New Zealand coupled with round the clock and world class support for AWS, SQL Server, and other cloud technologies, enables it to support Australian and New Zealand customers from the word go.

Recently we received an alert from one of our clients that running threads were high on one of their servers. Once we logged in, we noticed that all the selects were waiting for a table-level read lock. We scrolled through the process list and found the selects that were causing the problems. After killing the offending query, everything went back to normal. At first we couldn’t understand why the query took so long, as it looked like all the others. Then we noticed that one of the WHERE clauses was strange: there, we found a SLEEP(3) attached with OR to the query. Obviously, this server was the victim of a SQL injection attack.

What is SQL injection?

I think most of us know what SQL injection is, but as a refresher, SQL injection is when someone provides malicious input that ends up in your WHERE clause, allowing them to run their own statements as well. Typically this occurs when you ask a user for input, like a username, but instead of a real name they give you a MySQL statement that will be run by your server without you knowing it. Let’s see a few examples. We have a simple table:

If we don’t know the name of the table, we can still cause trouble by blocking access to the database. If we insert SLEEP() in the WHERE clause, it will be executed for every matching row… and if we inject it like “OR SLEEP(n)”, it will be executed for every row in the table! Okay, this will be “just” a long-running select. It shouldn’t cause much trouble thanks to InnoDB and transaction isolation, unless something needs a table lock.
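To make this concrete, here is a sketch of what such an injected statement looks like (the post table and the injected value are illustrative):

-- the application intended to run something like:
--   SELECT * FROM post WHERE author = '<user input>';
-- with the input  anything' OR SLEEP(3)--   the server actually executes:
SELECT * FROM post WHERE author = 'anything' OR SLEEP(3)-- ';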

Some common examples of what causes table locks are:

explicit lock table

insert/update/delete on MyISAM

ALTER table on InnoDB

Once statements start waiting for a lock on the table, all subsequent selects will wait for the previous locking statement to finish.

As we see in the example, ALTER TABLE will wait until it can get a lock on the post table, and this blocks every other select against the table from then on. Or, if you are using a MyISAM table, a simple update/insert will block access to the table, because it needs a table-level lock for the duration.

How can we defend ourselves from SQL injection?

There are several ways to secure yourself from SQL injection.

First of all, validate the input. If you expect only letters and numbers, filter it with a regexp, for example, to make sure there are no special characters. Also escape the inputs on the application side; programming languages have built-in functions to do that (e.g. mysql_real_escape_string() in PHP).

Use prepared statements! They won’t allow two clauses if you specified only one. When you use prepared statements, the variables are transmitted as MySQL variables. Even if the string is not escaped, it will end up in one variable, and MySQL treats it as a longer string. (For more details see: http://dev.mysql.com/doc/refman/5.7/en/sql-syntax-prepared-statements.html )
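A minimal sketch using MySQL’s server-side prepared statements (the table and column names are illustrative) shows why the injection above becomes harmless:

PREPARE stmt FROM 'SELECT * FROM post WHERE author = ?';
SET @author = 'anything'' OR SLEEP(3)-- ';
EXECUTE stmt USING @author;   -- the whole input is bound as a single string; SLEEP() never runs
DEALLOCATE PREPARE stmt;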

Use a tool like MySQL Enterprise Firewall, which is a plugin for MySQL and can filter your statements to make sure there are no things like: || 1==1

I would like to start a little discussion about this: if you have encountered SQL injection before, would you share with us how it was done, or, in general, how you prevent SQL injection in your applications?

As the winter in the Northern hemisphere is giving way to spring, slowly but surely, blog posts are blooming in the gardens of Oracle, SQL Server and MySQL. This Log Buffer plucks some of them for your reading pleasure.