Disclaimer

The individual owning this blog works for Oracle in Germany. The opinions expressed here are his own, are not necessarily reviewed in advance by anyone but the individual author, and neither Oracle nor any other party necessarily agrees with them.

Tuesday, May 2. 2017

Okay, here it is, the last article in this blog. As I made a mistake when looking at the configuration of my Feedburner RSS feed, it went public a little bit early. Anyway, here it is: the new blog is at blog.moellenkamp.org, and a short article about the new start can be found here.

Monday, May 1. 2017

From time to time it is necessary to think about all the stuff you carry around with you that holds you back or weighs too much. This weekend I decided to stop c0t0d0s0.org. To be exact ... I've been thinking about it for a while, but in the last few days I made up my mind. There were few new articles in the last years anyway, so it's probably not that much of a change. But the active decision to stop c0t0d0s0.org was long overdue.

So I decided to put the whole site into "archive" mode. Even without new articles there is still a lot of traffic on the content, and I don't want to take that away.

However, I decided to start something new as well. This is not the end of me writing things. There will be a follow-on to my old blog, almost from scratch. I still want and need a place to share things like my findings about performance problems at customers, or an interesting capability of Solaris when I find one. However, the new site will cover my much broadened interests as well. It's a fresh start.

There will be only one more article on this blog after this one, explaining where you can find my new site. Perhaps after that just articles that point to corresponding articles in the new blog, but I'm not sure about that. I will redirect feeds and the like to the new blog as soon as the new site is ready, so no action from your side is necessary.

Wednesday, April 26. 2017

Sometimes a simple question leads you deep into basic discussions. This was something I had to solve two years ago. The hard part was not finding it out; that was pretty obvious from the start. The problem was to explain it.

Since then I have seen it recur several times, most recently two weeks ago, so I decided to write it down. I will just take the customer's question: "I had a SAN and my tar -x was running fine with a short runtime. Now you gave me a ZFSSA, I'm using NFS, and the times for tar -x are horrible: 61 minutes." I'm writing about this now because, in different guises, I've seen this problem again and again over the last few months, and I think it's time to write an article I can just point to when I see this problem again.

To start with: this issue is not Solaris-specific ... it is more or less an NFS-specific problem, so it may be interesting to users of other operating systems as well.
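As a back-of-the-envelope illustration of why per-file latency, not throughput, dominates such a workload (all numbers below are made-up assumptions for this sketch, not measurements from the customer case): tar creates files one at a time, and over NFS each file creation typically has to be acknowledged by the server before the client proceeds, so extracting many small files turns into a long chain of synchronous round trips.

```python
# Back-of-the-envelope model: extracting many small files over NFS.
# Every number here is an illustrative assumption, not a measurement.

def extract_time_s(num_files, rtts_per_file, rtt_s):
    """Rough lower bound on tar -x runtime: each file needs a few
    synchronous server round trips (create, write/commit, close)."""
    return num_files * rtts_per_file * rtt_s

num_files = 30_000       # assumed archive with many small files
rtts_per_file = 3        # e.g. CREATE, COMMIT, attribute update
rtt_local_s = 0.00002    # local filesystem, mostly cached
rtt_nfs_s = 0.0005       # network plus stable-storage latency

local = extract_time_s(num_files, rtts_per_file, rtt_local_s)
nfs = extract_time_s(num_files, rtts_per_file, rtt_nfs_s)
print(f"local: ~{local:.1f} s, nfs: ~{nfs:.1f} s")
```

The point of the toy model: multiply the per-round-trip latency by 25 and the total runtime grows by the same factor, even though the amount of data is unchanged.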

Sunday, March 5. 2017

Recently I had a performance tuning gig at a customer who reported that, despite both systems having the same number of vCPUs configured in the Logical Domain, the performance of the two systems was different. A further observation was that a significant number of CPUs weren't used on one of the systems.

Sunday, March 5. 2017

I just want to share something that I learned a few days ago by doing exactly this. Let's assume you have a ZFS pool, 1 TB in size, and you want to add some storage. By accident you grab a slice 0 that is 128 MB in size instead of the whole LUN. The obvious question is how to get rid of it. You may get the idea of replacing the 128 MB slice with a 1 TB LUN; after all, we do this kind of replacement all the time to increase the size of rpools. However, for the given situation this is an exceptionally bad idea.

We have 1.4 billion devices out there running Android, all running the same OS. We have roughly 1 billion iOS devices, all running a different OS, but all running the same different OS. Both with significant compute power per device. We have 10 million Raspberry Pis out there, and the model 3 is quite powerful. I don't know how many routers, TV sets, and fridges are running a modified general-purpose OS.

You could joke that we are just one really bad zero-day and enough criminal energy away from a Terminator 3-style scenario. The bright side: we are still protected from the full T3 scenario by a lack of updates in technology, as described in "CNN: The U.S. is still using floppy disks to run its nuclear program". By the way, this is the only case where I'm against updates. This code and hardware has not killed us for the past decades. Keep it this way. Still bad enough: given failing TVs, a lack of Netflix and porn, no smartphone to stare at, and the necessity to talk with each other ... people will go on a rampage anyway and behave like post-nuclear-armageddon zombies.

Wednesday, September 28. 2016

One of the usual lines in customers' /etc/system is the one limiting the size of the ARC. For a long time you used the zfs_arc_max parameter for doing so. However, with Solaris 11.2 there is a new parameter named user_reserve_hint_pct. It's currently the suggested way to limit the ARC. However, it works differently than the old parameter. I want to shed some light on this in this blog entry.
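For illustration, the two approaches look roughly like this in /etc/system (the values here are placeholders, not recommendations; check the Solaris Tunable Parameters Reference for your release before setting either). The old parameter caps the ARC at an absolute size; the new one reserves a percentage of physical memory for applications and lets the ARC grow into the remainder:

```
* Old way (before Solaris 11.2): cap the ARC at an absolute
* size, here 4 GB (value in bytes, hex)
* set zfs:zfs_arc_max=0x100000000

* New way (Solaris 11.2 and later): reserve 70% of memory for
* applications; the ARC has to stay out of that reservation
set user_reserve_hint_pct=70
```

The practical difference: with zfs_arc_max you reason about how big the ARC may get, with user_reserve_hint_pct you reason about how much memory your applications need, which is usually the question you actually wanted to answer.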

Thursday, July 21. 2016

I'm playing a little bit with SDR at the moment. One of the nice things is that you can use it to receive ADS-B information. This is a system that sends (besides other information) the position and altitude of an aircraft. This capability is part of future traffic control systems, just in case you are wondering why aircraft are doing this. However: the information is unencrypted and anybody can receive it. You just need a RasPi, an antenna, a DVB-T stick with a certain chipset, and a program called dump1090.
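dump1090 can emit its decoded messages as simple CSV lines on TCP port 30003, the so-called BaseStation (SBS-1) format. A minimal sketch of pulling the position and altitude out of such a line might look like this; the sample message is fabricated for illustration, and the field positions assume the common 22-field BaseStation layout:

```python
# Minimal parser for dump1090's BaseStation (port 30003) CSV output.
# Field indices assume the common 22-field SBS-1 layout; the sample
# line below is made up for illustration.

def parse_sbs(line):
    """Return (icao, altitude_ft, lat, lon) from an SBS MSG,3
    airborne-position line, or None for other message types."""
    f = line.strip().split(",")
    if len(f) < 17 or f[0] != "MSG" or f[1] != "3":
        return None  # only MSG,3 messages carry lat/lon
    icao = f[4]                               # 24-bit hex address
    alt = int(f[11]) if f[11] else None       # altitude in feet
    lat = float(f[14]) if f[14] else None
    lon = float(f[15]) if f[15] else None
    return (icao, alt, lat, lon)

sample = ("MSG,3,1,1,3C6444,1,2016/07/19,17:00:00.000,"
          "2016/07/19,17:00:00.000,,37000,,,53.5511,9.9937,,,,,,0")
print(parse_sbs(sample))  # prints ('3C6444', 37000, 53.5511, 9.9937)
```

In a real setup you would read these lines from a socket connected to the receiver's port 30003 and collect the positions per aircraft over time.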

However, I thought: "Joerg, you have access to the raw data of your own receiver. Let's play with that." And this is the result. It's 24 hours' worth of air traffic, from 19.7.2016 17:00 to 21.7.2016 17:00. You can click on it to get the full-size result.

Tuesday, May 31. 2016

Last night the 3rd update to the Solaris 11.3 cheatsheet went online. I've added live migration for kernel zones, live reconfiguration for kernel zones and non-global zones, and a part about dissecting the version numbers of Solaris 11 packages. It's available at the usual location.

Friday, May 20. 2016

Some typos fixed, some clarifications included, some content added, a layout quirk removed: there is a new version of the Solaris 11.3 cheatsheet. You will find the new cheatsheet (version of 19.05.16 22:08 when you look at the last page) at the known location.

PS: I just saw I forgot to update the contributions list ... dang ... it will be in the next update.

Tuesday, April 19. 2016

As I got some questions about it recently: with a normal SATA/SAS disk you can simply unplug the disk without warning and preparation. NVMe drives are different. For all practical purposes they are PCI devices, and they want to be handled as such. Unplugging an NVMe drive without the proper procedure is pretty much like yanking out a PCI card without preparation. So when you want to remove an NVMe disk, you have to use the hotplug command in Solaris. To cite the documentation: