I'm a heavy user of scratchpads in i3, and I often don't like the dimensions a window gets after making it floating. Neither do other people, see here and here.

I've used a customized version of the solution proposed here in one of the comments by the creator of i3-gaps (Airblader).
This has served me well, but one thing bugged me: with multiple monitors it wouldn't center the window correctly. So I made a Python script that first uses Qt to get all screen dimensions and then determines the correct offset based on the mouse position.
It's probably a bit overkill, but it works, so I'm happy with it.

Note that if you update your system in the meantime, things may have to be recompiled at some point. I've experienced this with the lsw command, which uses some X calls that changed when I upgraded from Ubuntu 17.04 to 17.10.

You may need to install some Python packages (python3 -m pip install ...); when you try to run it you'll probably discover which ones exactly, as I forgot to keep a requirements.txt.
From what I remember you need at least: python3 -m pip install pyuserinput pyqt5 python-xlib

Step 3: modify resize mode in i3

Probably you already have a "resize" mode; just add bindings like Shift+j and Shift+k to that mode to call the Python script:
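For reference, such a resize mode could look like this in the i3 config (the script path and exact bindings are assumptions, adapt them to your setup):

```
mode "resize" {
    # shrink/grow the floating window via the centering script (hypothetical path)
    bindsym Shift+j exec --no-startup-id ~/bin/resize_and_center.py shrink
    bindsym Shift+k exec --no-startup-id ~/bin/resize_and_center.py grow

    bindsym Return mode "default"
    bindsym Escape mode "default"
}
```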

This is not going to be a really great blog post, but it consists of some notes that may be helpful to other people trying similar stuff, and to future me: should I upgrade to a future version, I can read here what I did last time regarding customization/tuning of my laptop.

Most of my stuff is checked into git (.-files in my home directory). That stuff really works out of the box and results in immediately having the right terminal fonts, settings, i3 configurations, vimrc, helper programs, and so on.
I'm not going to talk about that kind of configuration.

Slightly irrelevant, but I run Ubuntu Server, so it is kind of minimal, and I run i3 on top of it.
This Intel Skylake notebook comes with two GPUs, which is quite cool:

Intel Corporation HD Graphics 530 (rev 06)

NVIDIA Corporation GM107M [GeForce GTX 960M] (rev a2)

Small notes

Intel GPU issues

On Ubuntu versions <= 17.04 I used to have issues with my Intel driver. I don't know why exactly, but I had to boot with nomodeset (set via grub) to disable graphics altogether, then install the NVIDIA proprietary drivers and explicitly disable the Intel GPU.
The tools I use for that are software-properties-gtk, and nvidia-settings for disabling the Intel driver.
So the NVIDIA card was always used, which has the downside that the laptop drains its battery faster. Luckily, with Ubuntu 17.10 it seems to work with the Intel driver. Installing a newer kernel on an older Ubuntu version might also help.

Switch to init level before/after X is started

systemctl isolate multi-user.target - switch to non graphical mode

systemctl isolate graphical.target - switch to graphical again
I don't think you'll need those commands much; you can also use CTRL+ALT+F1 etc. to go to a console and run stuff there.
For example, upgrading to a newer distribution can result in i3 restarting, which is annoying if you started apt in a terminal within i3.

Global tips

Update your Kernel

I would switch to a newer kernel; better yet, choose the newest one. At the time of writing that was for me: Linux zenbook 4.16.11-041611-generic #201805221331 SMP Tue May 22 17:34:25 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
I got my commands from here, and this seems to be the official repo for that tool.

TL;DR

$ ukuu --list
$ xhost +
$ sudo ukuu --install v4.16.11

Fix video tearing/flickering

Stupidly enough I hadn't tried to fix this until today, so I haven't tested the NVIDIA solution yet, but for Intel it seems to work.
Oh man, I've always hated the flickering / weird horizontal (sometimes diagonal) lines while playing any stupid youtube video; apparently it's super easy to fix:
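For Intel, the fix boils down to enabling the driver's TearFree option. A minimal sketch of /etc/X11/xorg.conf.d/20-intel.conf (assuming you use the xf86-video-intel driver; restart X afterwards):

```
Section "Device"
    Identifier "Intel Graphics"
    Driver     "intel"
    Option     "TearFree" "true"
EndSection
```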

Fix hibernation

On Ubuntu 17.04 I got it working; then upgrading to 17.10 broke it: everything worked, but after resume I got a blank screen and nothing responded. Apparently that was a regression.
It was easily solved (for me) by updating the kernel to the newest version; see the discussion in the comments of the bug report.
The rest of these notes are from memory from a long time ago, so they might be incomplete...

First of all, I couldn't manage to get it working with btrfs, but that was quite a few kernel versions ago, so nowadays it might as well work (I haven't tried it recently).
Either way, I have switched to ext4 since.

I also recall that I needed to switch to a non-encrypted swap partition, also maybe nowadays it works with encrypted swap.

My laptop has less than 24 GiB of RAM (16 GiB); obviously the swap partition needs to be able to hold a dump of your entire RAM.
One small caveat in my case: I had to instruct grub that /dev/nvme0n1p6 is the partition to resume from.

Finally, edit /etc/default/grub, adding something like this:

GRUB_CMDLINE_LINUX_DEFAULT="resume=/dev/nvme0n1p6"

You might as well change that annoying 10s timeout to something like 2s while you're at it, then run sudo update-grub.
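So /etc/default/grub ends up with something like (values are examples):

```
GRUB_TIMEOUT=2
GRUB_CMDLINE_LINUX_DEFAULT="resume=/dev/nvme0n1p6"
```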

Hardware Accelerated video

Will type this out tomorrow, right now I'm tired!

Tap to click touchpad

Check out the xinput command. xinput list will give you a list of input devices; find the ID of the one which looks like a touchpad. Then do xinput list-props <device-id>, which should tell you what properties you can change for the input device. You should find one called something like Tapping Enabled, with a number in parentheses after it (in my case it's libinput Tapping Enabled (276)). Finally, run xinput set-prop <device-id> <property-id> 1, and tapping should work.

To make the change permanent, find a way to run that command on startup. One way would be to add exec xinput set-prop <device-id> <property-id> 1 to ~/.i3/config.
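A startup-friendly sketch that looks the device up by name instead of by id, so it survives the ids changing between boots (the "touchpad" name pattern is an assumption, check xinput list for yours):

```shell
# enable tap-to-click on the first device whose name contains "touchpad";
# xinput set-prop also accepts the property by name instead of number
id=$(xinput list | grep -i touchpad | grep -o 'id=[0-9]*' | head -n1 | cut -d= -f2)
xinput set-prop "$id" "libinput Tapping Enabled" 1
```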

Configure Mouse speed

UPDATE:

It still works, and the android.sh script is still needed for my Huawei P10 it seems, but I found a faster way to sync files: better use adb-sync!
Compared to the awfully slow performance of MTP it's very fast. Also, in my case MTP chokes when syncing lots of files, or very large files; the connection seems very unstable. With adb-sync I haven't experienced any of this.
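A hedged usage sketch (as I recall adb-sync takes a local source and a device destination, with --reverse for the other direction; check its --help and make sure USB debugging is enabled):

```shell
# push local music to the phone
adb-sync ~/Music/ /sdcard/Music

# pull photos from the phone (reverse direction)
adb-sync --reverse /sdcard/DCIM/Camera/ ~/photos/
```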

Sometimes--actually quite often--I have some weird issues with my Linux laptop.
I could dedicate many blog posts to this kind of stuff: handling three screens, hacks for my touchpad and trackball mouse, weird boot issues with btrfs, etc.
However, this one is especially weird and I don't want to forget how I fixed it.

Mounting Samsung Galaxy Note 4 on Ubuntu...

Bluetooth would have been very nice; my device is recognized (hcitool scan), but I never got it working using obexfs -b etc. However, after some struggling I got MTP working.

Execute it with bash android.sh; probably only after one or two failures it will stop throwing the error, and you will be able to browse your phone via the /android path (in this case). Don't forget to accept the dialog on the phone, and note it will not work if the screen is turned off (consider something like RedEye Stay Awake).
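The android.sh script itself is not shown here, but a minimal sketch of such a script, using jmtpfs as the MTP implementation (an assumption; mine may have used a different FUSE tool), could look like:

```shell
#!/bin/sh
# mount the phone via MTP on /android, retrying a few times because the
# first attempts tend to fail until the dialog on the phone is accepted
MOUNTPOINT=/android
sudo umount -f -l "$MOUNTPOINT" 2>/dev/null  # clean up a stale mount, if any
for attempt in 1 2 3; do
    if jmtpfs "$MOUNTPOINT"; then
        echo "mounted on $MOUNTPOINT"
        exit 0
    fi
    echo "attempt $attempt failed, retrying..."
    sleep 2
done
exit 1
```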

A few caveats! (DO NOT rm stuff from it..)

I recommend you only use this to copy files from the device, and not manipulate it too much, as the connection sometimes times out for some weird reason.
When that happens, simply abort the android.sh script, try umount -f -l /android, and unplug and re-plug the device.

This happened to me: after I did rm -rf * in the /android/Phone/DCIM/Camera/ path the connection got lost, and the directory got screwed up:

So delete stuff using the device itself; install some app like Solid Explorer. The above problem was also fixed by using Solid Explorer to remove the files in Camera (so even though the directory seemed to be erroneous, it was caused by something weird with the files; I verified that by also moving all the files into another directory, and in that case it was the new directory that caused the I/O error).

I use CLion in this blog post, but it should be the same for any of the other editors (PyCharm, PhpStorm, IntelliJ, etc.).

It took me a while to get a setup that works reasonably well for me at work, for what I expect is not a very uncommon setup.
That's why I'm sharing this in a blog post.

The project I'm working on is quite big: ten years under development, a large codebase and a complex build process.
The debug build results in a 1.2 GiB executable, and the intermediate files generated by the compiler/linker are many, and big.
During a build a lot of files are removed/(re)created/generated, so in general a lot of I/O happens.

Our build machines are extremely powerful, so it doesn't make sense to work on a local machine given the build times.
That's why compiling happens on remote machines. I have worked remotely at a lot of companies, and usually I would simply use vim + a lot of plugins.
However, nowadays I'm accustomed to the power IDEs can provide, primarily navigation-wise (jumping to classes, files, finding usages, etc.), and I simply don't want to work without a proper IDE.

This is my setup

I use an NFS mount (sshfs would suffice as well) where I mount from the remote to local, not the other way around, otherwise compiling will be extremely slow.
In my opinion using file synchronization in these kinds of setups is too error prone and difficult to get right.
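For the sshfs variant, the mount direction described above looks roughly like this (hostname, paths, and options are examples):

```shell
# mount the remote project tree locally; compiling still happens remotely
mkdir -p /projects/MyFirstProject
sshfs buildserver:/home/ray/projects/MyFirstProject /projects/MyFirstProject \
      -o reconnect,compression=yes
```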

As a side-note; I've seen synchronization work moderately okay within a PHP project. But so far not in a C++ project where intermediate/build-files/libraries are first of all large and scattered throughout the project folder.

In my previous blog post we fixed fsnotifier as shown in the previous image, but this also causes a new problem.

Lots of I/O is slow over a network mount

During compiling I noticed my IDE would hang; the only cause could be that it's somehow flooded by the enormous number of lines it now receives from fsnotifier. Perhaps when working with the project files on a local disk the IDE wouldn't hang, because simple I/O (even just checking file stats) doesn't have network overhead.

Solution, ignore as much (irrelevant) I/O as possible

Here I made the fsnotifier script--that was at first just a simple proxy (calling the real fsnotifier via ssh)--more intelligent: it now filters out intermediate files generated by the compiler (.o, .d, and some other patterns).
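A minimal sketch of such a filter as a shell function (the real script filters the fsnotifier protocol stream, which pairs event lines with path lines; the extensions here are examples):

```shell
# drop paths of intermediate compiler output; extend the pattern for your project
filter_intermediate_files() {
    grep --line-buffered -Ev '\.(o|d|obj|gch)$'
}

# usage idea: ssh buildserver /path/to/fsnotifier | filter_intermediate_files
```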

Alternative solutions

The fsnotifier script outputs its process id to /tmp/fsnotifier.pid and hooks two signals, so you can enable/disable it with a signal. Disabling simply pauses outputting all updates from the real fsnotifier (that is invoked via ssh).

Another extension you may find useful: make the build script touch a file like /path/to/project/DISABLE_FSNOTIFIER and have the fsnotifier script pause itself (or behave differently) during the build, until it sees for example an ENABLE_FSNOTIFIER file.

Simply disabling fsnotifier altogether doesn't fix the problem: CLion would keep nagging occasionally about conflicts with files that have changed both on disk and in memory. And when auto-generated files are re-generated by the build, I want my IDE to reflect them immediately.

Fine-tuning your filter

The filter is just a bash/ksh function, so you can easily extend it with patterns appropriate to your project. The fun thing is that you can killall -9 fsnotifier and JetBrains will simply restart it, so there is no need to restart the IDE (and with that have it re-index your project). Debug the filters by tailing /tmp/fsnotifier-included.log and /tmp/fsnotifier-filtered.log.

Update: 13th October 2016

Nowadays I no longer need to filter out *.o files etc. to keep the IDE responsive: the network improved (and perhaps something improved in newer CLion versions as well).
Another change I did make to the script: based on the ROOTS that get registered (for monitoring the project path), it decides whether to use fsnotifier over ssh or not (for local projects it would otherwise try to log in via ssh, find nothing, and the IDE would hang at that point).

This should work for all their editors: PyCharm, IntelliJ, CLion, PhpStorm, WebStorm, etc.

The editor(s) use this tool to "subscribe" to changes on the filesystem. So if you change a file that's also in a buffer in for example CLion, it will know it needs to reload that file from disk in order to show the latest changes.

Without this tool it falls back to periodically checking for changes, or checking when a specific file is activated; I don't know exactly, but it's slower anyway.

You probably started searching for a solution because you saw this error in the console or in a popup in the IDE:

In this example I work locally on /projects/MyFirstProject, where it's /home/ray/projects/MyFirstProject on the server.
The super easy solution is to make sure your local path is exactly the same. In my case I made a symlink so I have /home/ray/projects/MyFirstProject both on my local- and remote machine.

On the local machine I can run the above ./fsnotifier example through ssh; let's test that (make sure you have ssh keys configured correctly for this, otherwise you will get an authentication prompt):
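fsnotifier speaks a simple line protocol on stdin/stdout, so a quick manual test over ssh could look like this (hostname and paths are examples; as I recall, roots are registered with a ROOTS block terminated by #):

```shell
# register the project root, keep stdin open with cat, then touch files
# in the project on the server and watch the events appear
{ printf 'ROOTS\n/home/ray/projects/MyFirstProject\n#\n'; cat; } | \
    ssh buildserver /home/ray/bin/fsnotifier
```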

The fun thing is that the displayed files are actually already correct, so you don't need to do any mapping. Just make sure you launch your IDE on the /home/ray/projects/MyFirstProject folder.
(Which the aforementioned fsnotifier-remote script should be able to do, but I encountered multiple issues executing it under Linux and I didn't feel like diving into its Python code.)

You can log the communication between the IDE and fsnotifier over ssh by inserting this in the fsnotifier wrapper script: strace -f -F -ttt -s 512 -o /tmp/fsnotifier-debug.log (put it before the ssh command).
Then you can find stuff like this in the /tmp/fsnotifier-debug.log:

Setting up Nagios + Nagvis + Nagiosgraph on Ubuntu (14.04) can be a pain in the neck.

Default Ubuntu (14.04) ships with Nagios3, which is plain ugly and old; the bundled Nagvis is also pretty old and less user-friendly.
So I created a Docker image that installs the--at the time of
writing--newest versions of Nagios, Nagvis, the Nagios plugins and Nagiosgraph.
(Along with Apache 2.4 + PHP 5.5 for the web interfaces.)

I'm new to Docker, so leaving comments/rants/improvements is appreciated.

TL;DR

docker run -P -t -i -v /your/path/to/rrd-data:/usr/local/nagiosgraph/var/rrd rayburgemeestre/nagiosnagvis
docker ps # to discover port
boot2docker ip # to discover host other than localhost (if you are using boot2docker on OSX)
open http://host:port # you will get a webinterface pointing to nagios/nagvis or nagiosgraph

Caveats with the install

For Nagvis you need a different broker called livestatus, for which both
Nagios and Nagvis need config changes, and you must specifically
configure it to support Nagios version 4, otherwise you will get an error
starting Nagios. Specifically this one:

In the source root the --with-nagios4 flag is not propagated to its
sub-packages. So I just make everything, then specifically clean the
mk-livestatus-xx package and re-configure it with --with-nagios4, make, make install.
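In commands, roughly (the directory layout and version number are examples from memory):

```shell
# build everything once from the source root
make

# then rebuild only livestatus with explicit Nagios 4 support
cd packages/mk-livestatus/mk-livestatus-1.2.4
make clean
./configure --with-nagios4
make
sudo make install
```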

If I had to guess, the livestatus configure script by default tries to
detect the Linux distribution, and as Ubuntu 14.04 ships with Nagios 3 by
default, it probably assumes version 3.

Directories in the container

Nagios, Nagvis and Nagiosgraph are all installed in subdirectories of /usr/local.

You are likely to want /your/own/rrd-data directory mounted as /usr/local/nagiosgraph/var/rrd inside the container,
so the RRD databases are not stored inside the container and retained after rebuilding/upgrading the container.
This is possible with the -v flag: docker run -P -t -i -v /your/own/rrd-data:/usr/local/nagiosgraph/var/rrd rayburgemeestre/nagiosnagvis

Make sure the docker user (uid 1000) has the appropriate read-write permissions on that rrd directory.

The problem

The title is a reference to this article from 2011, a blog post from someone who once encountered a similar issue. Hopefully my blog post will prevent someone else from spending a day on this. We are in the middle of a migration from Oracle 11.2 to 12.1, and from PHP (Zend Server, more specifically) we had some connectivity problems to Oracle; the PHP function oci_connect() returned:

Good luck googling that Oracle error code: nothing hints in the right direction, only that it's an error that occurs after the connection is established.
Quote from http://ora-28547.ora-code.com/:

A failure occurred during initialization of a network connection from a client process to the Oracle server: The connection was completed but a disconnect occurred while trying to perform protocol-specific initialization, usually due to use of different network protocols by opposite sides of the connection.

The problem in our case was with the characterset

The "tl;dr" is: you may be using an Oracle "Light" client instead of the "Basic" client. In Zend Server this means that in the Zend Server lib path some libraries are missing. The Light client only supports a few charactersets. If you have some other Characterset that isn't default, that may be the problem. You need to make sure the Oracle Instant client Zend Server is using is the Basic client.

Unfortunately you cannot tell this from the phpinfo() output. Both Light and Basic return exactly the same version information.
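What you can check instead are the libraries shipped in the instant client directory: as I recall, the Basic client provides libociei.so, whereas the Light client ships the much smaller libociicus.so instead (the path below is an example; use wherever your Zend Server keeps the instant client):

```shell
# Basic client -> libociei.so present; Light client -> libociicus.so instead
ls /path/to/instantclient | grep -E 'libociei|libociicus'
```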

How we found out..

Luckily I was able to successfully connect to the new database server from another virtual machine. This was an older Zend Server instance, where the Oracle instant client had been patched from 11.1 to 11.2. The Zend Server that failed had 11.2 already, so we assumed patching wasn't necessary. I compared the strace outputs.

The first observation was that during the communication the server--on the right in the following image--stopped and concluded there was a communication error.

Working VM on the left, failing server on the right.

The second observation in the diff was that there was also a difference between libraries loaded.

More insight into the problem..

We didn't specify explicitly what characterset to use for the connection, so it will try to find out after the connection is established. We use WE8ISO8859P15 in our database, and that charset is (amongst others) provided by libociei.

Had we specified the charset in the oci_connect() call (the fourth parameter), we would have seen:

PHP Warning: oci_connect(): OCIEnvNlsCreate() failed. There is something wrong with your system - please check that LD_LIBRARY_PATH includes the directory with Oracle Instant Client libraries in /home/webro/test.php on line 4
PHP Warning: oci_connect(): ORA-12715: invalid character set specified in /home/webro/test.php on line 4

That would have hinted us towards the solution earlier. Also, in strace there would have been no connection setup at all, as the client can bail out sooner with "invalid character set specified". Apparently with the Light Oracle client version 11.1 the error used to be more helpful (see the aforementioned blog post):

Problem

The symptom is that while editing, the IDE freezes; somehow the keyboard no
longer responds. Two years ago at Tweakers there was luckily someone using
Ubuntu who could tell me right away how to fix it ("Just killall -9
ibus-x11", and it would magically continue to work). Now, more recently at
AutoTrack--where I now work--a colleague encountered the same issue. Luckily I
knew the fix this time.

The fact that I personally know at least five different individuals who spent
time fixing this makes me guess there are a lot more people still. Hence this
blog post, with some keywords that will hopefully lure others into this fix...

My super awesome NVIDIA Quadro K600 doesn't work properly with the default video drivers in Linux Mint 15, 16 or Ubuntu 13.10.
In Mint especially it was unstable; in Ubuntu everything seemed fine for a few days, until the GPU finally crashed as well.

Linux mint 15 / 16

You disable the default driver, called nouveau, from being loaded by the kernel with nouveau.blacklist=1.

In Mint I've tried editing GRUB_CMDLINE_LINUX etc. in /usr/share/grub/default/grub and in all of /etc/grub.d/*. Somehow update-grub didn't pick it up, and I was not so patient, so I ended up simply editing /boot/grub/grub.cfg.

Ubuntu 13.10

In Ubuntu I attempted to directly edit /boot/grub/grub.cfg again, adding the blacklist parameter. Somehow this failed as well; the NVIDIA installer kept complaining about nouveau being loaded.

So I attempted the 'normal approach' again: vim /etc/default/grub, modifying this line: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset nouveau.blacklist=1".
I also googled and found this answer on Stack Overflow, suggesting nomodeset is necessary as well (so I added both). Then sudo update-grub and EAT REBOOT INSTALL REPEAT.

Somehow Linux Mint f*cked up booting into my Windows 8 partition; it had some problems recognizing my partition table or something. (At work I have the exact same setup, and there were no problems.)
I ended up fixing it with the above command, and from Windows (I had to restore an image) using this tutorial that uses EasyBCD.

In shell scripting I prefer the KornShell. A while ago I experimented with "oh my zsh", but I switched back to ksh. Zsh's auto-completion for program commands is really unsurpassed (tab completion on program parameters for grep, for example); the auto-correct however is quite annoying. There is also a git plugin that visualizes the active git branch in the $PS1 prompt. I liked these features and wanted to add them to ksh.

Separated history amongst the different ksh shells.

My problem with the default behaviour: history is globally shared amongst all shells. I tend to work in a screen and do different stuff in each buffer, so it's annoying when stuff from one buffer magically appears in another buffer.

So what I do is I make sure there is one history file ~/.ksh_history which contains all history from all shells. When starting a new shell I copy this file into a history file specific for that ("sub")shell, i.e. ~/.ksh_history_files/<shellpid>. Each new shell does this.

When a shell is started all history files from ~/.ksh_history_files/* are merged back into ~/.ksh_history. And the ones that are no longer in use (shells have exited) are removed. This is done with a simple lsof call.

Commands are processed through a simple ksh function that makes sure all history lines are unique, without changing the order.
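A sketch of the mechanism (deduplication shown as a plain filter; details like the binary history-file format mentioned below are left out):

```shell
# keep only the first occurrence of each history line, preserving order
dedup_history() {
    awk '!seen[$0]++'
}

# give this shell its own history file, seeded from the shared one
HISTDIR=~/.ksh_history_files
mkdir -p "$HISTDIR"
cp ~/.ksh_history "$HISTDIR/$$" 2>/dev/null || true
export HISTFILE="$HISTDIR/$$"
```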

Some funny caveats were:

A history file should start with the character sequence \x81\x01.

Each command in history file should end with a \x00 character.

A prompt that embeds current git branch

The prompt shows the current branch if you are inside a git clone. Maybe the oh-my-zsh git integration is more advanced, no idea. Luckily this visualization in $PS1 is very fast.
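A minimal sketch of such a prompt (the function name is mine; my real implementation may differ):

```shell
# print the current branch, or nothing when outside a git clone
parse_git_branch() {
    git rev-parse --abbrev-ref HEAD 2>/dev/null
}

# ksh re-evaluates PS1 for every prompt, so the substitution stays current
PS1='$(parse_git_branch) $ '
```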

Integrated my "launcher tool" I wrote for windows a long time ago.

The browser is assumed to be "chromium-browser" (sudo apt-get install chromium-browser).

The result for gi dan flavin is opened in a new chromium tab.

i <url> - open url in browser

g <search terms> - search with google.com

gi <search terms> - search with google images

gv <search terms> - search with google videos

gnl <search terms> - search with google.nl

gs <search term> - search google scholar

w <search term> - search wikipedia

wa <search terms> - search wolfram alpha

yv <search terms> - search youtube

y <search terms> - search yahoo

yi <search terms> - search yahoo images

yv <search terms> - search yahoo videos

imdb <search terms> - search imdb

h <search term> - search hyperdictionary

v <search term> - search vandale (dutch dictionary)

Usage example: open the run menu (ALT+F2), then type gi dan flavin to get the example result from the screenshot.

I had to abandon my pure-ksh-functions approach to make the launcher commands available everywhere.
So you have to call install_launcher as root to install the shortcuts as scripts in /usr/local/bin/;
Ubuntu's run menu does not respect shell functions, unfortunately.
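As an illustration, one of those installed scripts could look like this (the script name follows the list above, but the URL pattern is an assumption; the real scripts are generated by install_launcher):

```shell
#!/bin/sh
# sketch of /usr/local/bin/gi: search google images for the given terms
build_url() {
    # join the arguments with + so they fit in a query string
    printf 'https://www.google.com/search?tbm=isch&q=%s' \
        "$(printf '%s' "$*" | sed 's/ /+/g')"
}
# inside the real script: chromium-browser "$(build_url "$@")"
```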

The performance problem

Opening phpmyadmin becomes quite annoying if the initial loading after logging in takes around 30 seconds.
Viewing the queries being executed by phpmyadmin with meta log monitor easily shows the bottleneck is "SHOW TABLE STATUS FROM '<DATABASE>'".
It is the only query requiring > 14 seconds in this case.

Phpmyadmin executes this query twice: in its navigation frame for a listing of tables, and in the right frame for the detailed listing.

Create cache for: SHOW TABLE STATUS FROM <DATABASE>

The output of this query is not that accurate anyway: running it twice and comparing the output shows that some columns are estimates.
So caching it for two minutes would probably not harm phpmyadmin.

Then I created a table to store the SHOW TABLE STATUS output in (I realize the types I used are probably overkill for some fields).

This doesn't work, by the way:
INSERT INTO showtablecache SHOW TABLE STATUS FROM '<DATABASE>' (leaving out the "Database_" column from the above CREATE TABLE).
I didn't really expect it to work, but I would have been pleasantly surprised if it did.

I created a resultset with output similar to the SHOW TABLE STATUS query using a SELECT on INFORMATION_SCHEMA.TABLES,
and perform the INSERTs with the following cron:
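The gist of that cron job (the database/table names are assumptions; the column list mirrors the SHOW TABLE STATUS output):

```shell
#!/bin/sh
# refresh the SHOW TABLE STATUS cache; schedule e.g. */2 * * * * in crontab
mysql meta <<'EOF'
DELETE FROM showtablecache;
INSERT INTO showtablecache
SELECT TABLE_SCHEMA, TABLE_NAME, ENGINE, VERSION, ROW_FORMAT, TABLE_ROWS,
       AVG_ROW_LENGTH, DATA_LENGTH, MAX_DATA_LENGTH, INDEX_LENGTH, DATA_FREE,
       AUTO_INCREMENT, CREATE_TIME, UPDATE_TIME, CHECK_TIME, TABLE_COLLATION,
       CHECKSUM, CREATE_OPTIONS, TABLE_COMMENT
FROM INFORMATION_SCHEMA.TABLES;
EOF
```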

Some notes

I never did any Lua scripting before, I find this website quite useful as it provides a console for Lua (but for other languages as well).
This way you can test some stuff without constantly restarting mysql-proxy.

With regards to the caching, if you create a new table, this table will become visible once the cron updated the cache. So you don't want to set the delay for too long.
You could do some extra scripting for encountered CREATE TABLE statements in mysql-proxy, or make a more advanced cron script that checks the faster "SHOW TABLES;" more often to see if any caching needs more priority.

The New KornShell

I prefer ksh (Version JM 93u+ 2012-02-29) for my shell (with "set -o vi"). Not that it's soooo much better than bash as a CLI (and it's probably pwnd by some of zsh's features like programmable autocomplete), but I do find it a lot cleaner than bash for scripting.
Anyway, currently I've given all machines I work with a /bin/ksh and chsh'd it for my user, but I noticed I missed bookmarking the current directory with "pushd ." for returning to it later with "popd" (after i.e. some subtask that makes you end up /nowhere/near/your/old/far/away/path).

Sometimes you just don't want to open a new tab in screen. (And you are right, you could of course also use goto.cpp for bookmarking.)
An alternative solution would be starting and exiting a subshell.

Found: dirstack.ksh by Eddie

So I googled and found this stack overflow answer, which has a pretty nice pushd/popd/dirs implementation. But it behaves a little differently: "pushd /usr" bookmarks and changes to that directory (the normal behaviour).
What I often want, however, is to store a directory right before I'm about to leave it (chances are I didn't use "pushd" but "cd" to get there in the first place). Normally you simply use "pushd ." to put it on the stack (and ignore the useless change-dir on the side).
But this implementation is explicitly designed so that the current directory is already (and always) the first (or zeroth) position on the stack, and from that line of thought it would be "useless" to put it in the list as a duplicate.

I still want to use pushd on $PWD, so I commented these four lines in dirstack.ksh:

Then I remembered my book (that has caused me many jealous colleagues :+) also provided pushd and popd as examples in the appendix .

So I was curious to see if these were usable (the book is from 1995).

Found: fun/{pushd|popd|dirs} by David G. Korn himself*

* by my guess

SuSE provides these scripts in /usr/share/ksh/fun so I didn't need to type them in.
If you need them, I tarballed them into kshfun.tar.gz (md5=7173831211d3d54f26f630f3cc720282).
I was surprised by /usr/share/ksh/fun/dirs: it re-aliases "cd" to a custom "_cd" function that basically does a "pushd" every time, with a stack of max. 32 dirs.
That's a cool idea: you can view your stack with "dirs" or even use the "mcd" (menu change dir) command.
You can use this "cd" alias like "cd N" as well, where N is the index on the stack (given by the "dirs" output). And pushd and popd work on this same stack.

For your .kshrc profile:

. /usr/share/ksh/fun/popd <<<< includes pushd as well **
. /usr/share/ksh/fun/dirs <<<< if you also want the custom cd + dirs