For this tutorial I'm assuming Kubernetes with Helm + Ingress is already deployed. If not, I've included the commands I used near the end of this article.

OwnCloud

My NAS at home runs Rockstor, using RAID10 with btrfs. Rockstor supports (docker) apps through a feature called Rock-ons, which includes OwnCloud, but after an update and some other issues with Rockstor my deployment broke at some point.
This frustrated me, so I decided to switch to Kubernetes instead.

I use my own cloud (no pun intended) as an alternative to services owned by Google/Amazon/Apple.
If you plan to do the same, just make sure to also make proper backups.

Deploy OwnCloud with Helm

Following the instructions, copy their default values.yaml (from here) and tweak all the values. It seems important to define a hostname! (If you try accessing the service later via IP address, the web interface will not accept it.)

Notes: owncloud.yaml is my values.yaml. I expect the rbac.create=true not to be needed, but I used it anyway; it was left over from copy & pasting another command. For convenience you can download my owncloud.yaml.
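For reference, the deploy then boils down to something like this (a sketch assuming the stable/owncloud chart and Helm 2, which matches the era of this post; adjust the chart name to your setup):

helm install --name owncloud stable/owncloud -f owncloud.yaml --set rbac.create=true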

If you redeploy Kubernetes and/or the system in general, a PersistentVolume may end up in a state that prevents PersistentVolumeClaims from binding to it.
There was a trick to force it to bind: IIRC with kubectl edit pv kube-owncloud-storage-data you can remove the reference it still holds to a previous PVC. It was a few weeks ago that I experimented with this, so sorry, I don't remember the details.
Only now I stumbled upon my notes and decided to wrap it up in a blog post.
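From what I can reconstruct from my notes, it comes down to clearing the stale claimRef; a sketch of the same fix without the interactive edit:

# Drop the stale reference to the old PVC so the volume becomes Available again
kubectl patch pv kube-owncloud-storage-data -p '{"spec":{"claimRef":null}}'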

The ingress settings took me two hours of debugging: OwnCloud was throwing 413 Request Entity Too Large errors when syncing some larger video files from my phone. Thinking this must be an issue inside OwnCloud, I experimented with lots of parameters and fixes for php, apache, etc. Then I realized it could be the Ingress in Kubernetes. The example below makes sure it doesn't block uploads up to half a gigabyte.
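I no longer have the exact manifest, but with the nginx ingress controller it comes down to an annotation like this (the ingress name is an assumption, check kubectl get ingress; on older controller versions the prefix was ingress.kubernetes.io instead):

# Allow request bodies up to 512 MB through the nginx ingress controller
kubectl annotate ingress owncloud nginx.ingress.kubernetes.io/proxy-body-size=512m --overwrite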

I remember having something like this as a child: http://www.endlessfoldingcard.com/endless-folding-card-p-768-918.html.
It was not that one, but something similar: a flyer for some stupid product. I was fascinated by it and could keep folding it for hours.
It was nicely made out of some very strong paper/cardboard; unfortunately I probably got rid of it at some point.

It took me a long time to find it again. Every now and then (with years in between) I would try to look it up on Google, unable to find it, until I had a moment of clarity, tried the additional keyword "Endless", and finally found something that I remembered.

Figuring out how it works

All the YouTube videos I found would basically fold the thing together and then decorate the card afterward; however, I wanted to print it beforehand in such a way that it would turn out nicely when folded.
To be honest, this one does attempt to explain some of the layout, but it wasn't clear enough for me.
This other video clearly shows how to fold it.

There are a few things you will notice when you fold one: some parts of the card stay constant, for example, and not all views have corners.
Anyway, I decided to treat it as a grid of tiles and printed two two-sided pieces of paper with unique numbers for each tile.
Then I deciphered which tiles end up where after folding, and which ones are rotated 180 degrees. See the madness below.

Design and print the card!

Designing the card in something like GIMP is a lot of work of course, and it would be hell if you had to prepare it manually for printing.
Luckily I wrote a C++ program that uses the very cool Selene image library, which I learned about via the Dutch C++ User Group:
Michael (the author) gave an awesome lightning talk about it. It was good timing, because a few days after that I needed to write this program.

Input & Output

As you can see the resulting images look a bit weird, but when printed double-sided, with cards A and B on one side and cards C and D on the other side of the paper, you can cut them out and fold them.

How to fold?

This is how I did it. I didn't plan to make a blog post, so I didn't completely document every step, but this picture should at least give you an idea:

For folding I bought a "Scor-Pal". I can recommend this tool: it makes nice folds, and doing without would give crappy results on thick paper IMO.

The polar bear piece of paper is sides A & C, already cut in the middle, with the two horizontal pre-folds made (sloppy ones though).
The other piece, sides B & D, is pre-folded as well, but I hadn't cut it horizontally yet.
After cutting, glue the white edges on sides B & C together and let it dry; the card should then be done.

Conclusion

After having folded about 60 of these I finally got the hang of it and could produce pretty slick cards.
One thing I tried was printing them myself with a color laser printer on very thick paper; this gave "meh" results, as the toner had trouble sticking to the paper, especially after folding.
I recommend doing the printing at a specialized shop, partly to avoid the toner coming loose, but also because aligning both sides is important. Not all printers are great at this, I've discovered, especially if you have to use the manual feed for the thick paper.

What worked best is this order:

Printing both sides

Cut out the two sides with something like

Do the (in total four) pre-folds on both pieces of paper (do them in the right direction)

Cut the first one vertically, perfectly in the middle.

Cut the second one horizontally, perfectly in the middle.

Put glue on the corners for the second one now (the one cut horizontally)

Then align those two pieces perfectly so it appears whole again.

One by one, add the two pieces of the other one (the one cut vertically); you'll find that with the glue still wet it's easy to adjust.

When done, let it dry, and later on pre-fold some more for the definitive card.

I'm a heavy user of scratchpads in i3, and I often don't like the dimensions of a window after making it floating. As do other people, see here and here.

I've used a customized version of the solution proposed in one of the comments by the creator of i3-gaps (Airblader) here.
This has served me well, but one thing bugged me: when using multiple monitors it wouldn't center the window correctly, so I made a Python script that first uses Qt to get all screen dimensions and determines the correct offset based on the mouse position.
It's probably a bit overkill, but it works, so I'm happy with it.

Note that if you update your system in the meantime, things may have to be recompiled at some point; I've experienced this with the lsw command, which uses some X calls that changed after I updated from Ubuntu 17.04 -> 17.10.

You may need to install some Python packages; when you try to run it you'll probably discover which ones exactly, as I forgot to keep a requirements.txt.
From what I remember you need at least: python3 -m pip install pyuserinput pyqt5 python-xlib

Step 3: modify resize mode in i3

Probably you already have a "resize" mode; just add something like Shift+J and Shift+K to that mode to call the Python script:
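I no longer have the exact lines, but in the i3 config it looks something like this (the script path is a placeholder for wherever you saved it):

mode "resize" {
        # ... existing resize bindings ...

        # grow/shrink the floating window via the Python script (hypothetical path)
        bindsym Shift+j exec --no-startup-id ~/.config/i3/resize.py shrink
        bindsym Shift+k exec --no-startup-id ~/.config/i3/resize.py grow

        bindsym Return mode "default"
        bindsym Escape mode "default"
}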

This is probably going to be a slightly weird post, but it's about something that works really well for me, and I actually believe other people could like it.

I like to get notifications when..

Compiling something finally finishes

Rebooting/provisioning some machine finishes

Someone comments on my Pull request

Someone mentions me on some JIRA ticket

I broke the build

But notification popups suck! You have to confirm them, you cannot let them auto-hide (as you might miss an important one),
they appear on the monitor/workspace you're not actively looking at, etc., and you may even be forced to use a mouse (yuck!).

My solution is to temporarily colorize the screen to get your attention, using a modified redshift!
Different shades of red are nice, but I am now using all colors of the rainbow via an extra parameter I've added, -f:
the color itself can thus tell you what's up.

A few examples

The color shift is not captured in a screenshot, so I had to take photos with my phone.
For each photo the caption shows the corresponding redshift parameters used.

redshift -O 2500 -f "multiply 1.0:0.5:0.5"

redshift -O 2500 -f "multiply 0.5:1.0:0.5"

redshift -O 6500 -f "multiply 0.2:0.2:1.0"

My actual workflow

My background scripts currently change colors automatically to..

RED - for general notifications (the message I find in the i3 statusbar)

GREEN - in case build succeeded

YELLOW - in case build FAILED

BLUE - in case mentioned on JIRA

BLUE - in case of activity on pull requests I'm involved in.

INVERSE - in case I broke the build!

So the workflow is basically: the color change tells me what's up, and I confirm it with WINKEY+LALT+SPACEBAR,
by which I mean I restore my screen color to the original state.
For example, if I get a green overlay, I know I can switch back to some other workspace and continue testing whatever just finished compiling there.
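The "confirm" part is just an i3 binding that resets the screen color; redshift's stock -x flag does exactly that (a minimal sketch, my actual scripts do a bit more bookkeeping):

# WINKEY+LALT+SPACEBAR: restore the original screen color
bindsym Mod4+Mod1+space exec --no-startup-id redshift -x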

More super handy usages!

Battery monitoring: Paint screen almost entirely red when: Battery remaining <= 5% AND Charger disconnected.
This saved me quite a few times. I'm using the i3 window manager and it's easy to miss the battery indicator if you have some tile toggled to fullscreen (an i3 feature, which also hides the i3 status bar).
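The battery check itself is a trivial loop; a minimal sketch, assuming a battery at /sys/class/power_supply/BAT0 and the -f flag from my fork:

#!/bin/bash
# Paint the screen red when battery <= 5% and the charger is disconnected
BAT=/sys/class/power_supply/BAT0
while sleep 60; do
    if [ "$(cat $BAT/status)" = "Discharging" ] && [ "$(cat $BAT/capacity)" -le 5 ]; then
        redshift -O 2500 -f "multiply 1.0:0.2:0.2"
    fi
done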

The "inverse" effect can make web browsing easier on the eyes at night, which is why I assigned it a separate keyboard shortcut.
Most browsers have plugins that enforce darker colors for websites, Dark Reader does a great job for Chrome for example. However, when transitioning between pages or switching tabs, there will be a "flickering" effect (see thousands of complains here and here) which will completely destroy your eyes.

Conclusion

I've been using this modified redshift (it's only one extra commit) and some other productivity hacks for a while now.
You can install my fork with the following steps:
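(A sketch of the build; <fork-url> stands in for my fork's repository URL, and the usual autotools dependencies are assumed.)

git clone <fork-url> redshift
cd redshift
./bootstrap && ./configure
make && sudo make install

# Example invocation using the extra -f flag from the fork:
redshift -O 6500 -f "multiply 0.5:0.5:0.5"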

The -f multiply 0.5 makes the screen 50% darker, which is just a personal preference.
With some small effort you could control the 0.5 value with other shortcuts, but that was not the point of this blog post.

To be continued..

I was finally able to attend this conference after missing out two years in a row, and it was great.
It has been the largest edition yet with 600 attendees, and AFAIK Bjarne Stroustrup was present for the first time this year.

I went to Berlin with my girlfriend two days before the event so we had a chance to see the city.
Even though the weather was very much what you would expect around this time of year (cloudy, rainy, etc.),
we had a great time, especially renting bikes and sightseeing.

[Image caption] Brief moment of no-rain..

Talks I attended... DAY 1

Opening Keynote - Bjarne Stroustrup

What is C++ and what will it become? It was a nice presentation showing the strength of C++ and providing a little history here and there (like the code below). Funny quote from the presentation: "Only a computer scientist makes a copy, then destroys the original". The committee has a difficult task, making the C++ language less complex, while the only thing the committee can do is add more to it; but they still succeed (i.e., with auto, constexpr, ..).

Boris Schäling asked "Scott Meyers retired from C++ a year ago; do we need to be worried about you?"; luckily we don't have to worry ;-). Bjarne answered that he has tried a few times to quit C++ in the past, but apparently he is not very good at it.

Learning and teaching Modern C++ - Arne Mertz

The speaker made an interesting point regarding some pitfalls, i.e. that many C++ developers learned C first: pointers, pointer arithmetic, C++03, C++11, .., basically a "layered evolution". However, Modern C++ isn't a layered evolution; rather, it is a "moving target". Nowadays we prefer make_unique and unique_ptr, so why not postpone teaching new, delete, new[], delete[], pointer arithmetic, etc. when teaching Modern C++? The same goes for C-style arrays, which are more complex to teach than std::array.

Actually kind of sad news: there are still schools in some countries where C++ is taught with the Turbo C++ compiler (see this SO question from a few days ago), which is extremely outdated. Other notes I scribbled down were for me to check out "clang-tidy" and to add "isocpp.org" to my RSS feeds.

Wouter van Ooijen--a professor teaching C++ in the context of embedded devices--made a good point: the order in which material is presented to students is the most difficult thing to get right. In most books on C++ the order doesn't make sense for embedded, which is why he creates his own material.

This talk was quite interesting; maybe it was just me, but at the beginning of the presentation it wasn't clear to me what an Entity Component System was. It became clear during the talk though.
He walked us through the implementation: advanced templating, lambdas, bit fiddling, all quite interesting. Maybe a bit too much content for one presentation, but very impressive stuff.
The room during the presentation was extremely hot, making it sometimes difficult to concentrate, and the talk went a bit over the scheduled time.

Some stuff I found interesting: the usage of sparse sets, and the use of proxy objects to make sure that certain methods of the library cannot be called at the wrong time.

ctx->step([&](auto& proxy)
{
    // do something with proxy
});

He went through a large list of features and how they are implemented.

Ranges v3 and microcontrollers, a revolution -- Odin Holmes

Quite an awesome talk, this one; the speaker is extremely knowledgeable on metaprogramming and embedded programming.
His company works with devices with very little memory (just a few kilobytes) and this talk was very forward-looking.
There was a crash course on the limitations of such devices: there is limited stack space, and how do exceptions and interrupts play along with it?

He then started with a real demo/hello world for such a device and demonstrated how even that small piece of code contained bugs and a lot of boilerplate.
In the rest of the talk he showed how to improve it; for instance, parsing (dangerously) with scanf (you can overflow the buffer, so you need a "large enough" buffer up-front... "And we all know that coming up with a size for a large enough buffer is easy, right?") can be replaced with a state machine known at compile time.
Ranges can be applied to lazily evaluate input, and as a result it consumes only the minimal amount of memory.

C++ Today - The Beast is back - Jon Kalb

Why was C/C++ successful? It was based on a proven track record, not a "purely theoretical language":
high-level abstractions at low cost, with the zero-overhead principle as a goal. In other words, not slower than what you could achieve by coding the same feature by hand (i.e., vtables).

If you like a good story and are curious about why there was a big red button on the IBM 360, or the reason behind the C++ "Dark Ages" (2000 - 2010) when very little seemed to happen, then this is the presentation to watch.
Spoiler alert: cough Java cough. OOP was the buzzword at the time, it was "almost as fast", computers got faster and faster, and we "solved the performance issue"!

Interesting statements I jotted down: "Managed code optimizes the wrong thing (ease of programming)", and regarding Java's finally (try {} catch {} finally {}): "finally violates DRY". He then asked the audience a few times what DRY stands for, which was quite funny as some people realized they were indeed repeating themselves; not all though, as someone else yelled "the opposite of WET".
He also "pulled the age card" when discussing Alexander Stepanov (the author of the STL): "You kids think std::vector grew on trees!".

DAY 2

Functional reactive programming in C++ - Ivan Cukic

A talk of two parts, the first on functional programming: higher-order functions, purity, immutable state. Functional thinking = data transformation. He discussed referential transparency, i.e. that replacing any function with its value should produce the same outcome. This can depend on your definition.

#include <iostream>

int foobar()
{
    std::cout << "Returning 42..." << '\n';
    return 42;
}

The above function, when used as int n = foobar();, can be replaced by 42, and that line of code would result in exactly the same thing (n containing 42); however, the console output won't be printed. Whether you consider the std::cout call to count as part of referential transparency is up to you.

He continued with object thinking = no getters; ask the object to do it. "Objects tend to become immutable."
I will have to review the presentation to get exactly what was meant by this.

Next up: reactive programming. If I am correct, this was his definition:

responds quickly

resilient to failure

responsive under workload

based on message-passing

Note: reacting, not replying; i.e., when piping Linux shell commands there is only one-way data flow.
To conclude, some random notes I made during his talk are below.

This talk was probably one of the best attended talks at the conference; the room was packed.
Coming in slightly late, I had to sit on my knees for the entire talk.
It was worth it though; I think I liked this talk the most of all I attended.
It was just the right mix of super interesting material and practical advice.

Coming from Amsterdam, where automated trading companies seem to kind of dominate C++,
it has always been very mysterious to me what exactly it is they do.
It felt like this was basically the first time the veil was lifted a little bit.
It's just amazing to hear how far they go in order to get the lowest latency possible:
within the time it takes for light to travel from the ground to the top of the Eiffel Tower, they can
take an order, assess whether it's interesting or not, and place the order... times ten!

A really interesting talk to watch whenever it comes online; it shows the importance of optimizing hardware,
bypassing the kernel completely in the hot path, staying 100% in user space. This includes network I/O (f.i., OpenOnload), cache warming, being wary of signed/unsigned conversions, checking the assembly, inplace_function (the speaker's proposal, stdext::inplace_function<void(), 32>), benchmarking without the "observable effect" by observing network packets, and more.

One note regarding network I/O, for example: if you read a lot but very little of it is interesting to the hot path, you may negatively affect your cache.
A solution would be to offload all the reads to a different CPU and cherry-pick only the interesting reads to send to the "hot" CPU.

Well, I was a bit tired at this point, so I cannot do the talk justice with a very thorough summary.
Even if I could, it's better to watch it from Michael Wong himself, because the slides help a lot in understanding the story.

I did learn a few things; maybe the first lesson for me is to try to stay away from all of this.
Still, aside from being super complicated, it's also an interesting topic, and good to know more about.
The ABA problem: he had good slides that visualized, step by step, the challenge of updating data in a multi-threading situation, having readers while writing to it, all wrapped in a fun story about Schrödinger's Cat (and Zoo).
Solutions discussed were hazard pointers and RCU (Read-Copy-Update).

The gains you can get by starting late and having a grace period, so you can do multiple updates at the same time, are interesting to learn about: situations where "being lazy" actually pays off!

Lightning talks!

Surprise! They had secret lightning talks planned. To be honest, at first I thought one hour and 40 minutes seemed long for a Meeting C++ update/review, so this was a nice surprise.
My favorite lightning talk was Michael Caisse reading from the standard as if it were a very exciting story; hilarious.
Second was James McNellis' "function pointers all the way down" (like "Turtles all the way down"; Bjarne actually also referenced this in his keynote).
The remaining lightning talks were also very good: Michael Wong's, Jens Weller's, Chandler Carruth's, and Bjarne's.
The latter, on Concepts, was quite interesting: "what makes a good concept?" It has to have semantics specifying it, which in practice seems to be an efficient design technique. Quite funny was his "Onion principle" on abstractions (IIRC?): "you peel away layer by layer, and you cry more and more as you go along". Jens's talk was also really fun; it started with end-of-the-world scenarios and worked towards the future C++ standards.

C++ metaprogramming: evolution and future directions - Louis Dionne

The closing keynote was a really clear and relaxed presentation of how metaprogramming evolved,
and in particular how boost::hana did. Again a nice history lesson, in which Alexandrescu's Modern C++ Design, boost::mpl, boost::fusion and the like all made an appearance. He showed what you can do with boost::hana at compile time and runtime. His talk really opened my eyes regarding constexpr, integral_constant, the differences between metaprogramming with types and with objects, and a lot more. It's amazing what his library can do. He argued that the world needs more metaprogramming, but less template metaprogramming, and concluded by sharing his view of the future.

The conference

There was a fun quiz with really difficult puzzles (C++ programs) that had to be solved in under 3 minutes each.
This was basically similar to peeling Bjarne's onion... but in a good way.

Between talks, lunch-break meetups were planned (20 minutes each, each with a specific topic). I attended two, and my view is that it's a great idea, but the fact that people have to come from talks and leave on time to catch the next one sometimes made the time way too short (or made you miss out on a talk because the room was already full).

The organization was super, as were the drinks and food, especially on the second day. The Andel's Hotel is a really good location, and the hotel itself is nice as well (if you are lucky enough to get a room there). For me it was all really worth the money.

Personally I like to write down a summary for myself, but I hope this blog post was also fun to read for someone else!

Using my docker images (master, slave) and helper scripts on github, it's easy to get Cloudera Manager running inside a few docker containers. The steps: get the most recent docker, install (GNU) screen, check out the repo, and in there do cd cloudera, bash start_all.sh (see the commands below). Note that the image(s) require being able to invoke --privileged, and the scripts currently invoke sudo. After running the script you get something like this (full example output here).
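In terms of commands, the steps are roughly (the repository URL is behind the links above; <repo> is a stand-in):

sudo apt-get install screen   # or your distro's equivalent
git clone <repo>
cd <repo>/cloudera
bash start_all.sh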

It's perhaps not really the way docker was designed to be used (it's running systemd inside), but for simple experimentation this is fine. These images have not been designed to run in production, but perhaps with some more orchestration it's possible.

Step 1: install Cloudera Manager

One caveat: because of the way docker controls /etc/resolv.conf, /etc/hostname and /etc/hosts, these show up in the output of the mount command.
The Cloudera Manager wizard does some parsing of this (I guess) and pre-fills some directories with values like:

In case you are looking for a free alternative to Camtasia Studio or many of the other alternatives:
one of my favorite tools of all time, ffmpeg, can do it for free!

The simplest thing that will work is ffmpeg -f gdigrab -framerate 10 -i desktop output.mkv (source).
This already gives pretty good results (if you use an MKV container; FLV, for example, will give worse results).

HiDPI: Fix mouse pointer

gdigrab adds a mouse pointer to the video but does not scale it according to HiDPI settings, so it will be extremely small.
You can configure the mouse pointer to be extra large to fix that. That pointer won't scale either, but at least you end up with a regular-sized pointer in the video.

Optional: Use H264 codec

You can find more options here; I've settled on single-pass encoding using -c:v libx264 -preset ultrafast -crf 22.

Now I know the device name I can use for audio is "Microphone (Realtek High Definition Audio)". Use it with the following parameters: ffmpeg -f dshow -i audio="Microphone (Realtek High Definition Audio)".
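In case you are wondering how to find the device name in the first place: ffmpeg can list the DirectShow devices for you. Putting audio and video together then looks roughly like this (a sketch combining the flags above):

# List available DirectShow capture devices (shows the audio device names)
ffmpeg -list_devices true -f dshow -i dummy

# Capture desktop video plus microphone audio with x264
ffmpeg -f dshow -i audio="Microphone (Realtek High Definition Audio)" -f gdigrab -framerate 10 -i desktop -c:v libx264 -preset ultrafast -crf 22 output.mkv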

Here is a resulting video where I used this command; the resolution of the video is 3840x2160 and the HiDPI scale is set to 2.5.

Update 1> Add more keyframes for better editing

For this I use the following command, which inserts a keyframe every 25 frames (the closer to one, the larger the output file will be):

ffmpeg.exe -i %1 -qscale 0 -g 25 %2

The option -qscale 0 is for preserving the quality of the video.

(Changing the container to .mov was probably not necessary, I tried this hoping that Adobe Premiere would support it, but it didn't!)

Update 2> Editing 4K on Windows 10...

I found the following tool for editing: Filmora, and (on my laptop) it was able to smoothly edit the footage. It supports GPU acceleration, but the additional keyframes really help with a smooth experience.

Once you get the hang of it (shortcut keys are your friend) it's pretty easy to cut & paste your videos.

Update 3> Support Adobe Premiere

As I discovered earlier, Adobe Premiere doesn't like MKV, but it also doesn't like 4:4:4 (yuv444p), the pixel format used by default (it seems).
You can view such information using ffprobe <VIDEO FILE>. Anyway, Premiere seems to like yuv420p, so add -pix_fmt yuv420p to make it work, presumably combined with the keyframe command from Update 1:
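ffmpeg.exe -i %1 -qscale 0 -g 25 -pix_fmt yuv420p %2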

A crazy idea: building a profiler/visualizer based on strace output, just for fun.
But who knows, there may even be something useful we can do with this..

The following image shows exactly such a visualization for a specific HTTP GET request (f.i., to http://default-wordpress.cppse.nl/wp-admin/index.php (URL not accessible online)).
The analysis in the image is based on the strace log output from the Apache HTTP server thread handling the request. Parameters for the strace call include -f and -F, so it includes basically everything the Apache worker thread does for itself.
(If it were to start a child process, that would be included too.)
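The capture itself was something along these lines; the -tt/-T flags (timestamps and time spent per syscall) are assumptions on my part, but the visualization needs timing data like that:

strace -f -F -tt -T -s 256 -o wordpress-request.strace -p <apache-worker-pid>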

This request took 1700 milliseconds, which seems exceptionally slow, even for a very cheap micro compute instance. It is; I cheated a little by restarting Apache and MySQL in advance, to introduce some delays that make the graph more interesting. It's still normal, though, for strace to slow down the program execution speed.

I grouped all strace lines by process ID and their activity on a specific FD (file descriptor).
Pairs like open()/close() or socket()/close() introduce a specific FD, and in between are likely functions operating on that FD (like read()/write()).
I group these related strace lines together and call them "streams" in the above image.

In the image you can see that the longest and slowest "stream" takes 1241 milliseconds; this one is used for querying MySQL and is probably intentionally closed last, to allow re-use of the DB connection while processing the request.
The three streams lower in the visualization follow each other sequentially and appear to be performing a lookup in /etc/hosts, followed by two DNS lookups directed to 8.8.4.4.

Why are we doing this? (Other than because it's Awesome!)

This works for any strace output, but my idea originated while doing web development.
This was for a relatively complicated web application that was divided into many sub-systems communicating mostly via REST calls with each other.
All these systems made lots of external calls to other systems, and I wanted a view where I could see, regardless of which sub-system or actual PHP code was being executed, the performance of,
specifically: I/O with files (i.e. for i18n/locale), scripts, SQL queries to MySQL and Oracle, REST API calls to systems X, Y & Z, Redis, Memcached, Solr, even shared memory, and disk caching.

If only there was a tool really good at capturing that kind of I/O... ah yes, there is: strace!
I switched jobs 7 months ago, before applying my strace tool to that code base, but I've applied it to similarly complex applications with success.

We already had tools for (more traditional) profiling of PHP requests.
Quite often the interpretation was difficult, probably because of a lot of nasty runtime reflection being used.
Also, when you needed to follow a slow function (doing a REST call), it was a lot of effort to move profiling efforts to the other system (because of OAuth 1.0b (omg..), expired tokens, ..).
Nothing unsolvable of course, but with strace you can just trace everything at once on a development environment (especially in Vagrant, which we used), spanning multiple vhosts.
If it's just you on the VM, perhaps you can strace the main Apache PID recursively. I didn't try that, but I think it would work.

Products like NewRelic provide dashboards for requests where you can gain such deep insights "off the shelf", basically, but the downside is that it's not cheap.
NewRelic f.i. hooks into Apache & PHP and has access to actual PHP function calls, SQL queries, etc. strace can't do that, because it only sits between the process(es) and the Linux kernel.

First, let's take one step back & properly parse the strace output..

It quickly became apparent that I couldn't get away with some trivial regex for parsing it, so I turned to bnfc and created the following BNF grammar to generate the parser.
I was quite surprised that this was so easy: it took me less than a working day to find a tool for the job, learn it, and get the grammar right for some strace output.

This tool provides you with an autogenerated base class "Skeleton", which you can extend to create your own Visitor implementation.
With this pattern it becomes quite easy to extract the meta-data you are interested in.
I will show a simple example.

The grammar

I came up with the following grammar that bnfc uses to generate the parser.
Reading it from top to bottom is more or less the way you can incrementally construct this kind of stuff.
You start really small: first chunking multiple strace lines into single strace lines, then chunking strace lines into
Pid, Timestamp and (remaining) Line. Then you further specify a Pid, the Timestamp, the Line, etc., slowly making the grammar more fine-grained.

No matter how nested these lines get, it will parse them, as long as I didn't forget anything in the grammar. (So far it seems to be complete enough to parse everything.)
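For reference, generating and compiling the C++ parser plus the Skeleton base class is just a matter of (assuming the grammar is saved as Strace.cf):

bnfc --cpp -m Strace.cf && make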

Visitor example

Using the BNF grammar, the above structure, and occasional peeking at the generated Skeleton base class, you can simply override methods in your own visitor to do something "useful".
The following visitor is a less "useful" but simple example that outputs all the strings captured for strace lines containing the open() function,
just to illustrate how you use this Visitor.

As opposed to a simple Visitor like this example, I parse all the lines, prepare a JSON representation for each line, and store that in ElasticSearch.
This way, selecting and filtering can be done afterwards. ElasticSearch is also a really fast solution in case you want to do more complex queries on your log.

A Proof of concept for Web

This time, at the beginning of each request, I have PHP instruct some script to run strace on the current PHP script's process id (or rather the Apache worker's) and all its (virtual) threads and sub-processes.
(If I were to track the request across the stack with "cross application tracing", you could even combine all the relevant straces for a given request. I didn't implement this, again because I switched jobs. (Info on cross application tracing in NewRelic.)
It is even relatively easy to implement if you have a codebase where you can just make the change (like injecting a unique id for the current request into a curl call, for example).)

The following image and code show how I capture straces from specific PHP requests, like the wordpress example I started this blog post with.
You can skip this part. Eventually these straces are linked to a specific request, run through a slightly more elaborate Visitor class, and fed into ElasticSearch for later processing.

(This also omits some other details with respect to generating a UUID for each request and keeping track of which strace outputs are related to which request.)

This way you end up with .strace files per process ID (they should probably include a timestamp too).
The long-running process removes the file the client checks from the todo folder as soon as it has started strace.
That way the client will no longer block, and the interesting stuff will be captured.
It uses a shutdown handler to instruct the long-running process to stop the capture (the Apache thread won't exit; it will wait for the next request).

Final step, To ElasticSearch!

I use a Visitor and my strace parser to create JSON representations of the strace log lines, containing the meta-data I need:
file descriptors, an array with all strings, a timestamp that ElasticSearch can understand out of the box, etc.

To get to my previous example, I can use cat test.log | ./strace-output-parser elasticsearch localhost 9200 strace_index to import the parsed lines to ElasticSearch.

In the above example I use filtering with a plugin called "head" to basically make the same selection as I did with the simple visitor example. I also highlighted one specific line to show the JSON representation.

I used PHP for processing the wordpress strace output from ElasticSearch and generated the visualization from the very first image in this blog post.
You can view the HTML output here.

Hopefully this blog post was interesting to read, and maybe you can find some use for the strace parser yourself. If you do, please let me know; that would be fun to hear.

Most people are probably familiar with gdb, and Ribamar pointed out to me that there is also an ncurses frontend inside gdb.
But in case anyone is interested, I learned that NetBeans also supports remote debugging. Even though it's not the most modern IDE in the world, and its vi emulation is cumbersome, it seems to have pretty good support for remote debugging.
It will just log in to some machine via ssh (i.e., dev11 or a real cluster), issue gdb <something>, and wrap around it. If you make sure it knows where the source files are on your development machine, you can use all the step-debugging features.

The only downside is that loading up cmd in gdb takes a while, probably ~30 seconds. Still, it's a lot faster than debugging with print statements and recompiling.
For cmsh it's already a lot faster, and on top of that you can issue a command multiple times via the REPL, so you can step-debug it multiple times within the same gdb session.
(Beware though that you probably need to connect again, as your connection may be lost.)

Example workflow

To show how it works, first with CMDaemon: my workflow is to create a unit test that fails, set a breakpoint in the unit test, and start the debugger.

→ breakpoint set, followed by the debugger stopping execution at that point.

→ step-into example, select the function to step into ➀ and click the button highlighted with ➁.

There is also the F7 key to "step into", but be prepared to step into assembly a lot of times (use CTRL+F7 to step out, and try again).
You will jump into the -> operator, shared pointer dereferences, and std::string constructors before getting into the function you want.
(Also note that the first time you step into assembly it will be very slow, but it will get faster the next few times.)

Wizard example to debug cmd unit test

Note that you want to set some bogus build command like whoami, because
NetBeans will otherwise try to be smart and clean your project directory for you
(and rebuild without using multiple cores, ..).

Note that the working directory should include src.
This is to help gdb later with finding the source code.

There is one fix needed that the Wizard didn't set properly for us:
go to project properties, Build / Make, and set Build Result to the executable.
The remote debugger will use this value when issuing gdb, and it's somehow empty by default.

Use ALT+SHIFT+o to jump to the file containing the test.
Set a breakpoint there using CTRL+F8.

The final thing we want to pass to gdb is the parameters for running our specific unittest.
In my example "${OUTPUT_PATH}" --unittests --gtest_filter=LineParserTest.empty.

Launch of yet another blog iiCommon Lisp wallpaperImproving the outline for the Adornment of the Middle WayUsing allegro with wxWidgetsLaunch of yet another blogMotion blurFunctional programmingEnable wake-on-lan on Linux Debian (4.0)