How much memory is enough? It seems you should have about as much memory as you have storage space on the system. Or maybe not?

So I gave a very long and detailed presentation about modern operating system memory management. It seems there are still way too many engineers who simply do not understand how memory is managed. They thought that because all server memory is being used, they should always add more memory to the server. They kept wondering how all the memory can be in use even when the server has 128 GB of RAM. Well, of course it's used: as disk cache. It would simply be silly not to use all available memory. I wonder how it's possible that people still don't get this in 2014; OS/2 was doing the same thing back in 1987, so it shouldn't be news to anyone. Another silly and absolutely incorrect belief is that swap usage indicates the system is running out of memory. It doesn't. Memory pages which aren't being used are moved to swap, and the freed RAM is used for something more useful, like disk cache. This is a really simple concept, and it's all about memory optimization. But it seems that (early) 1980s thinking is stuck hard with some administrators.
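You can see this on any Linux box; a quick sketch, nothing from the presentation itself. MemFree alone looks alarmingly small, while MemAvailable shows how much the kernel could actually hand out by dropping cache:

```shell
# Most of the "used" memory is reclaimable page cache; compare MemFree
# (truly idle pages) with MemAvailable (what could be freed on demand).
grep -E '^(MemTotal|MemFree|MemAvailable|Cached):' /proc/meminfo
```

On a busy server MemFree is often tiny while MemAvailable is most of the machine, which is exactly the point: the RAM isn't wasted, it's working as cache.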

People also complain that the system is slow when they start using it (the UI) after it hasn't been touched for a week or so. Of course it is; the UI has been swapped out, because it hasn't been needed for a week. There is more important stuff to use the memory for. Btw. OS/2 Warp did this beautifully too.
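That behaviour is tunable, by the way. A sketch of the relevant knob on Linux (the value 60 is just the common default, not anything from this post):

```shell
# vm.swappiness controls how eagerly the kernel swaps out idle application
# pages in favour of page cache; higher values mean swap more aggressively.
cat /proc/sys/vm/swappiness
# To keep interactive apps resident longer (requires root):
#   sysctl vm.swappiness=10
```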

Planned to make a Windows XP installation image with Finnish and English languages and all possible updates pre-installed, using WSUS and WUD. I'm kind of hoping this tool won't ever be needed, but as we all know, it practically will be sooner or later. I just wonder what will happen to the XP activation service: will it start to accept any license key, or reject every key? If every key is rejected, it might be a problem. But I assume nobody's going to sue me if I use a cracked Windows XP, because I still have valid licenses for every system. Just the activation part (and Genuine Advantage, lol) is skipped.

Great example of how much a little tinkering with SQL queries can actually affect performance. I'm sure everyone here has similar experiences, so practically this shouldn't be news to any of us. And an additional article about the SQL query planner. Of course, a little data denormalization (materialized views) can lead to even bigger performance gains very easily.

Experimented with Microsoft Azure. Works well. I just don't like the 'Windows license tax': Linux instances are much cheaper to run than Windows instances on Azure. Storage I/O performance wasn't great either, except on the cache device. One interesting thing to notice was that connections to Azure North Europe are about 15 ms slower than connections to Azure West Europe. I thought Finland was in Northern Europe. Anyway, North Europe in this case means the Dublin region in Ireland, and West Europe means the Amsterdam region in the Netherlands. I'm still curious why they won't provide easy RDS SALs via the Azure portal. I'm looking into this matter. Also, one thing to consider, depending on the application being served, is that the round-trip latency is about 40-60 ms higher than if the services were hosted in Finland.
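Measuring that per-region difference is easy with curl's timing variables. A sketch; the hostnames below are hypothetical placeholders, not real endpoints:

```shell
# %{time_connect} reports seconds until the TCP handshake completed,
# which is a decent proxy for round-trip latency to a region.
measure_connect() {
  curl -so /dev/null -w '%{time_connect}\n' --connect-timeout 5 "$1" \
    || echo "connect failed"
}
measure_connect "https://myapp-weu.example.net/"   # hypothetical West Europe endpoint
measure_connect "https://myapp-neu.example.net/"   # hypothetical North Europe endpoint
```

Running a handful of these from the office and taking the median gives a more honest number than a single ping.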

Some stuff about Azure:

I'm looking for a DaaS (Desktop as a Service) multi-tenant RDS solution (Remote Desktop Services Session Host), not multiple VDI hosts with a single user per Windows installation, which wastes a ton of resources. I would say that in our case a VDI solution uses about 10x more resources compared to an RDS solution. So from an economic point of view, VDI is absolutely out of the question.

I just dislike the fact that Microsoft Azure doesn't offer RDS SALs licensed directly, and forces you to work with slow and complex SPLA deals. I'm currently acquiring RDS CALs via an SPLA distributor, and I don't like the process at all. It has been getting better, but it's still complicated, slow and error-prone.

What I would like to see is one slider in Azure where I can just select that this server should support 100 concurrent RDS users. Also, the per-user / per-device licensing models are really outdated for cloud environments. It should be N concurrent users, not these pre-specified devices or users, which only complicates things in environments where users come and go and devices are replaced all the time.

There are a few 3rd party applications which nicely allow circumventing these Microsoft restrictions. Of course it means breaking the license terms, but it's a lot easier and also a very much cheaper alternative. I have been exploring those just out of curiosity. Unfortunately for Microsoft, these solutions seem to work very well, and as mentioned, the economic impact is huge. As long as nobody knows these products are used, it's a great option. And end users do not need to know how the RDS services are produced.

That's also one of the reasons I would like to get the licenses directly from Microsoft, so it would be cheap and simple. But right now they're just overcomplicating the whole thing.

Anyway, I'm still asking if anyone has any practical experience of how much users are affected by the additional 50 ms of latency. If not, then I'll simply have to launch a few test servers in the North Europe region to see what the practical impact is. That can be easily arranged.

If I weren't so unhappy with the current RDS CAL SPLA licensing model, I wouldn't actually be considering Azure at all. I was just hoping it would provide better service / license integration, because in every other aspect we're very happy with our current service provider, which also provides under 1 ms round-trip latencies for us.

- Thank you

It's great that someone is willing to publish results about hard drive reliability: Backblaze - What hard drive should I buy? Without hard data, these discussions are endless. I have one drive that has been working for 10 years, and I had another drive that failed in three months. Statistically, that's an absolutely meaningless conversation.

Checked out new stuff in Linux 3.13 kernel:

1.2. nftables, the successor of iptables

This is a nice and interesting new packet filtering and processing layer. I'll need to install it on one of the test servers.
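To give an idea of the new syntax, a minimal ruleset sketch for a simple host firewall (my own example, not from the kernel changelog):

```
#!/usr/sbin/nft -f
# One inet table covers both IPv4 and IPv6; default-deny input chain.
flush ruleset
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept  # allow reply traffic
        iif lo accept                        # allow loopback
        tcp dport 22 accept                  # allow SSH
    }
}
```

Compared to iptables, one table replaces the separate IPv4/IPv6 rule sets, which alone cuts the maintenance work in half.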

This is something I personally find very interesting, except that similar things have been done before without any kernel support. I have to check separately whether this has any practical significance.

6. Btrfs commit mount option

Now it's finally there. I was wondering why btrfs lacked it when ext4 already had it. I have been using very high commit intervals with ext4, and on temp disks I have disabled journaling and barriers completely. If the system crashes badly with that configuration, it's best to just format the whole temp partition and restart the task.
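For reference, a sketch of what that looks like in /etc/fstab; the devices and mount points are hypothetical, and commit is the number of seconds between periodic flushes (default 30 for btrfs, 5 for ext4):

```
# Long commit interval on btrfs: fewer flushes, more data lost on a crash.
UUID=...   /data     btrfs  defaults,commit=120             0 0
# Scratch ext4: huge commit interval, no write barriers; reformat after a bad crash.
UUID=...   /scratch  ext4   defaults,commit=300,nobarrier   0 0
# (A journal-less scratch ext4 would be created with: mkfs.ext4 -O ^has_journal)
```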

Aww. It seems that I managed to blog about only one week's worth of stuff. My backlog is growing again at an alarming rate. But that's all for now, folks.
