In both cases, the steps seem to be the same and quite straightforward:

Upgrade secondary members of the replica set

Step down the replica set primary to secondary, so an upgraded one becomes primary

Upgrade the previous primary so all are on the same version

In this case, using docker, upgrading the instances should be as easy as changing the version tag in the docker-compose.yml.
So, one at a time:
As my current primary is db01, I’ll start with db02. The change is just a version number in the file, so I’m not pasting the whole file here:

db02:
  image: mongo:3.2

A docker-compose up -d brought db02 down, replacing it with an updated mongod 3.2, and repeating this while watching rs.status(), I could see the machine disappear and then re-sync. NICE!
Repeat it for db03. NICE again!

Next step – step down
Running rs.stepDown() on the PRIMARY db01 makes db03 turn PRIMARY and leaves db01 a SECONDARY, ready to update to 3.2 as well…
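
For reference, a minimal sketch of that step in the mongo shell (member names as used throughout this setup):

// connected to the current PRIMARY (db01)
rs.stepDown()   // step down; the member refuses re-election for 60 seconds by default
rs.status()     // confirm another member (here: db03) has taken over as PRIMARY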

BUT WAIT!

This made me aware of the fact that I had forgotten to update my application configuration. While I extended the cluster to a 3-host system, I did not add db03 to the application’s mongo server config and the application server’s /etc/hosts – which I quickly changed at this point.

Changing db01’s image to 3.2 now and running docker-compose up -d did update the image/container and restart it – but rs.status() also made me aware that, according to their uptimes, the other instances seem to have been restarted as well.

So, there must be a way to update/restart single services of docker-compose, right? Let’s check that during the upgrade from 3.2 to 3.4.

Now that all 3 containers are running the 3.2 image, the SECONDARYs can be updated to 3.4 as well. The changed line in the docker-compose.yml:

version: '3'
services:
  db01:
    image: mongo:3.4
  ...

Now, instead of running a full docker-compose up -d, it seems the way to go is to act on a single service at a time.
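
The exact command isn’t given above, but docker-compose supports this directly; something like the following should recreate just the one container without touching the rest:

# recreate only the db01 service with its new image;
# --no-deps keeps docker-compose from also restarting linked services
docker-compose up -d --no-deps db01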

Step One – Configure and boot a single mongo server with docker

It gives details about the basics (set up containers, start a replica set). My goal is to go a little bit further, though. In addition to what the article suggests, I’d like to have the data of each container in a data volume. And I’d like to use docker-compose to keep the whole setup in order.

Using the version 3 syntax of docker-compose, I come up with a very basic initial file to start from:
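
The file itself isn’t reproduced here, but judging from the ports and volumes mentioned later, it presumably looked roughly like this (mongo 3.0 assumed as the starting version, matching the later upgrade to 3.2):

version: '3'
services:
  db01:
    image: mongo:3.0
    ports:
      - "30001:27017"
    volumes:
      - data01:/data/db
  db02:
    image: mongo:3.0
    ports:
      - "30002:27017"
    volumes:
      - data02:/data/db
volumes:
  data01:
  data02: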

To check if the machine is up, I connect to the second mongod instance from another machine with mongo --port 30002. Of course, this is – as of right now – only a separate single instance of mongod and not a replicaSet, as confirmed by a quick check of the replication status:
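
The exact output varies by version, but on a standalone instance rs.status() reports something along these lines:

> rs.status()
{ "ok" : 0, "errmsg" : "not running with --replSet" }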

At this point, I decided to make this another mongo exercise and start the replicaSet with only two servers, import my data, and only later add the third machine.

So, to get this dual-setup running, we need to tell the machines what replicaSet they are part of. This can be done with a command line option on mongod (--replSet), but I wanted to make it more versatile and put some options into a config file for mongo and start the daemon by telling it where to pull the config from.

So, in a subfolder etc, the simple config file etc/mongod.conf is created:

replication:
  oplogSizeMB: 400
  replSetName: rs0

(the oplog size is an arbitrary number here and should be adjusted properly for production environments)

Now we need to map this file into the containers and tell mongod to read it during startup:

start the container with the additional options, resulting in mongod --config /etc/mongod.conf
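
In the docker-compose file, this boils down to mounting the config file into the container and overriding the command – a sketch for one of the two services:

  db01:
    image: mongo:3.0
    # read the replication settings from the mounted config file
    command: mongod --config /etc/mongod.conf
    volumes:
      - ./etc/mongod.conf:/etc/mongod.conf
      - data01:/data/db
    # ports etc. unchanged from the initial file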

Until now, I was under the impression I could spin up a mongo cluster just like that, but some research and this question on Stack Overflow made me aware that it won’t work without a little bit of shell command line.

So, let’s init the replSet

To get the set working, we need to define a config in the mongo shell, for example like this:
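
The original command isn’t reproduced here; a minimal sketch using the replSetName from the config file above and the two hosts would be:

> rs.initiate({
    _id: "rs0",
    members: [
      { _id: 0, host: "db01:27017" },
      { _id: 1, host: "db02:27017" }
    ]
  })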

(Note: as the machines connect internally, the internal ports 27017 need to be used, not the exposed ones)

However, to make this work, the containers need to be known as db01 and db02, while docker-compose automatically generated names for them. So the names have to be set manually in the docker-compose file:
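
I’m not reproducing the exact change, but the usual way to pin the names is a container_name (and hostname, for resolution between the containers) entry per service:

  db01:
    container_name: db01
    hostname: db01
  db02:
    container_name: db02
    hostname: db02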

Now it’s time to import some data into the cluster and try to connect my existing application to the new cluster.
It should be noted here that the application is NOT running as part of the docker setup but is intended to connect to the exposed ports.
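
The import script itself isn’t shown; a single call might look like this, with hypothetical database/collection/file names, run against the exposed port of the primary:

# import a JSON dump into the replica set via the primary's exposed port
mongoimport --host 192.168.10.20 --port 30001 --db mydb --collection items --file items.json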

quick break, have some coffee while we wait for mongoimport to finish

Importing the data to the cluster with a mongoimport shell script on the primary server is not a big problem, but my PHP and old \MongoClient based application seems to have a problem:

MongoConnectionException
No candidate servers found

MongoConnectionException
MongoClient::__construct(): php_network_getaddresses: getaddrinfo failed: Name or service not known

Looks like the fact that different IPs and ports are used “on the outside” (the configuration exposed by docker) is not good enough for the PHP mongo driver.
To circumvent this, let’s try to match internal and external configurations:

First, match up internal and external mongod ports by changing the internal ones:

The command is extended to start mongod internally on port 30001 (30002 for db02) while still exposing it on the same port.
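
As a sketch, the relevant part of the compose file for db01 would then be (analogous for db02 with 30002):

  db01:
    # mongod now listens on 30001 internally, and the mapping exposes the same port
    command: mongod --config /etc/mongod.conf --port 30001
    ports:
      - "30001:30001"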

Then the hostnames db01/db02 are added to the application server’s /etc/hosts so there is no problem resolving the names:

192.168.10.20 db01
192.168.10.20 db02

After another docker-compose up -d, the changed configuration is applied; however, this breaks our cluster!! The primary and secondary have changed their internal ports, so the cluster connection is lost.

To tell the replSet about this, we need to reconfigure the cluster with the changes:
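
The exact shell session isn’t shown, but a reconfiguration along these lines should do it – fetch the current config, fix the host:port entries, and (since the set may have lost its primary) force the reconfig:

> cfg = rs.conf()
> cfg.members[0].host = "db01:30001"
> cfg.members[1].host = "db02:30002"
> rs.reconfig(cfg, { force: true })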

For quite some time I’ve been using a scheduled php script to automatically send twitter updates (tweets) to my own account.

Since it was very easy and convenient to do this with the so-called basic authentication, I used/modified the simple script provided by Fabien Potencier.

NOW, BASIC AUTH IS GONE!! (Correction: See Update below)

It was announced long ago, but since September 1st they have really switched it off, and this method does not work anymore because now there is only OAuth.

The “strange” (and more secure) thing about OAuth is that you allow an application to do something with your twitter account, but you do not give out your credentials to the application – you just grant permissions. In most cases, when you are browsing the web, this makes sense, since a third-party application wants to do something with your account. But in my case, there are just ME, MY TWITTER ACCOUNT and MY PHP SCRIPT that want to communicate, not a bunch of different users.

So: What do I have to do to make it work again?

Step 1: Officially set up your application in Twitter Apps

On http://dev.twitter.com/apps you can and have to define your application with a name, some details and a callback URL. Details on this one later.

Step 2: Get and configure the TwitterOAuth PHP Package

On GitHub, one of the Twitter developers maintains a package for PHP: http://github.com/abraham/twitteroauth. Download/git-clone it to your web server. You need to adjust the config.php file in this package with the Consumer Key and Consumer Secret of your newly created twitter application (found in the app details section).
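
If I remember the package layout correctly, config.php boils down to a few defines – the values below are obviously placeholders:

<?php
// config.php of the twitteroauth package (placeholder values)
define('CONSUMER_KEY', 'your-consumer-key');
define('CONSUMER_SECRET', 'your-consumer-secret');
define('OAUTH_CALLBACK', 'http://your-server/twitteroauth/callback.php');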

Now that you have the twitteroauth package on your webserver, you should be able to access it and see the default page with a “Sign in with Twitter” button. If you do this, you will be redirected to a screen you might have seen with other third-party applications before. Only, this time, it’s yours:

In this case, I want to connect my account @managerator to the application “managerator”, which might be a bit confusing. (And please excuse me for using a German UI 😉 )

Now that we have connected application and account, how can we send a tweet from the application to the account?

Check the “index.php” file in the twitteroauth package and you will see that it loads/requires the necessary php files and afterwards does some API calls on the $connection. And, using a browser, this works – of course only if your browser is already authenticated on twitter (and with the account that granted access to the application).

Now, in my case I want to use the command line php (like a cron-job) to send messages. And of course, PHP is not authenticated on twitter. Thus, a simple call to “php index.php” on the command line will fail.

So, how do we get around this??

Step 5: Store the access_token

When you look again at the index.php, you will see

/* Get user access tokens out of the session. */
$access_token = $_SESSION['access_token'];

This access token is an associative array that contains all the things you need for further authentication – coming from your current browser session.

So, simply var_dump the access_token from your browser session and paste the details into your code.
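
The original snippet is cut off at this point; judging from what the old twitteroauth access token array contains, the hard-coded version would look roughly like this (values redacted):

<?php
// hard-coded access token, copied from a var_dump() of $_SESSION['access_token']
$access_token = array(
    'oauth_token'        => 'xxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxx',
    'oauth_token_secret' => 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
    'user_id'            => '12345678',
    'screen_name'        => 'managerator',
);

// with the token hard-coded, the connection works without a browser session
$connection = new TwitterOAuth(CONSUMER_KEY, CONSUMER_SECRET,
    $access_token['oauth_token'], $access_token['oauth_token_secret']);
$connection->post('statuses/update', array('status' => 'Hello from the command line!'));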