So the latest news is that Linus has switched to KDE. This apparently after first switching to XFCE, then, I guess, back to GNOME. Hmmm.

I’m still on XFCE. Can’t be bothered to try anything else. Yes, XFCE is somewhat sucky, but once you fix its stupidities (such as the file manager taking a minute to start up due to some vfs snafu that’s apparently been around forever), it’s there. I’ve entertained the thought of trying something else, but it’s not an exciting enough proposition.

Now I am wondering what to do once Fedora 16 stops being supported. Should I spend the afternoon upgrading to 18? The issue is that I can’t do the normal upgrade thing, since that would boot into its own environment and would not load a module that I load on startup to turn off the bad nvidia card with the screwed-up heatsink. It’s impossible to do this in the BIOS (stupid, stupid Lenovo; never buying another Lenovo again). Anyway, that means having to do it right after boot, but before the GUI comes up, since the GUI (even when using the intel card) would turn the laptop into a portable oven, and nowadays it will just turn off and die. I am thinking that if the upgrade happens during the wintertime, I could just stick the laptop in the snow (and wait till it’s at least 20 below freezing), and then it could perhaps stay sane for the duration of the upgrade. I will probably try to do the upgrade by yum only, but that seems bug-prone and would require some manual tinkering, and I just don’t care enough to do that.

Next time picking a distro I’m going with something LTS I think. And … Get off my lawn!!!

I needed a way to visualize which t get hit for a polynomial such as when z ranges in a simple set such as a square or a circle. That is, this is really a generically two-valued function above the z plane. Of course we can’t just graph it, since we don’t have 4 real dimensions (I want t and z to both be complex, of course). For each complex z, there are generically two complex t above it.

So instead of looking for existing solutions (boring; surely there is a much more refined tool out there), I decided it was the perfect time to learn a bit of Python and check out how it does math. Surprisingly well, it turns out. Look at the code yourself. You will need numpy, cairo, and pygobject. I think everything except numpy was already installed on Fedora. To change the polynomial or drawing parameters you need to change the code. It’s not really documented, but it should not be too hard to find where to change things. It’s less than 150 lines long, and you should take into consideration that I’ve never before written a line of Python code, so some things might be ugly. I did have the advantage of knowing GTK, though I had never used Cairo before and only vaguely knew how it works. It’s probably an hour or two’s worth of coding; the rest of yesterday afternoon was spent playing around with different polynomials.

What it does is randomly pick z points in a rectangle, by default with real and imaginary parts going from -1 to 1. Each z point has a certain color assigned. On the left-hand side of the plot you can see the points picked, along with their colors. Then it solves the polynomial and plots the two (or more, if the polynomial is of higher degree) solutions on the right with those colors. It uses the alpha channel on the right so that you get an idea of how often a certain point is hit. Anyway, here is the resulting plot for the polynomial given above:
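The core loop can be sketched in a few lines of numpy (this is a hypothetical, stripped-down version with made-up names, not the actual script; the real code also does the cairo drawing, the coloring, and the GTK UI):

```python
import numpy as np

rng = np.random.default_rng(0)

def roots_over_square(coeffs_for_z, n_points=1000, lo=-1.0, hi=1.0):
    """Pick random complex z in a square and, for each z, solve the
    polynomial in t whose coefficients depend on z.  Returns a list of
    (z, roots) pairs; a degree-d polynomial gives d roots above each z."""
    zs = rng.uniform(lo, hi, n_points) + 1j * rng.uniform(lo, hi, n_points)
    return [(z, np.roots(coeffs_for_z(z))) for z in zs]

# Illustrative polynomial only: t^2 - z = 0, so the two t's above z
# are the two square roots of z.
pairs = roots_over_square(lambda z: [1.0, 0.0, -z])
```

Everything else is drawing: the zs go on the left with their assigned colors, and the corresponding roots go on the right with the same colors.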

I am glad to report (or not glad, depending on your point of view) that using the code I did find a counterexample to a lemma I was trying to prove. In fact, the counterexample is essentially the polynomial above. That is, I was thinking that every t inside the “outline” of the image would have to be hit if all the roots were 0 at zero. It turns out this is not true. In fact, there exist polynomials where t points arbitrarily close to zero are not hit even if the outline is pretty big (actually the hypotheses in the lemma were more complicated, but there is no point in stating them since the lemma is not true). For example, doesn’t hit a whole neighbourhood of the point . Below is the plot for . Note that as n goes to infinity the singularity gets close to , which is the union of two complex lines.

By the way, be prepared: the program eats up quite a bit of RAM. It’s very inefficient in what it does, so don’t run it on a very old machine. It will stop plotting points after a while, so that it doesn’t bring your machine to its knees if you happen to forget to hit “Stop.” Also, it does points in large “bursts” instead of one by one.

Update: I realized, after writing above that I had never written a line of Python code, that I actually had written a line of Python code before. In my evince/vim/synctex setup I did fiddle with some Python code that I stole from gedit, but I didn’t really write any new code there; I just whacked some old code I did not totally understand with a hammer till it fit in the hole that I needed (a round peg will go into a square hole if hit hard enough).

So … apparently searching an unordered list without any structure whatsoever is supposed to be better than having structure. At least that’s the new GNOME shell design, which removes categories, removes any ordering, and places icons in pages. The arguments are that it’s hard to categorize things and that people use spatial memory to find where things are.

Spatial memory was here before, with nautilus. It didn’t work out so great. No, people don’t have spatial memory. For example, I use a small number of applications often, so I put their launchers somewhere easy to reach. The rest of the applications I use rarely if ever. No, I do not remember where they are; I do not even remember what they are named. E.g., I don’t remember what the ftp client is called, but I am not a total moron, and I correctly guess to look for it in the “Internet” menu, which is manageable. Given that I’ve used ftp probably once in a year, I do not remember where it is. Another example is when Maia (6 years old) needs a game to play. I never play games, but I have a few installed for these occasions. Do I want to look through an unordered list of 50-100 icons? Hell no. I want to click on “Games” and pick one. 95% or so of the applications I have installed I use rarely. I will not “remember” where they are. I don’t want to spend hours trying to sort or organize the list of icons. Isn’t that what the computer can do for me? The vast majority of people (non-geeks) never change their default config; they use it as it came. So they will not organize it unless the computer organizes it for them. I have an android tablet, and this paged interface with icons you have to somehow organize yourself is totally annoying. It’s one of the reasons why I find the tablet unusable (I don’t think I’ve turned it on for a few months now). That interface might work well when you have 10 apps, but it fails miserably when you have 100.

If I could remember that games are on page 4 (after, presumably, a lot of unneeded effort to put them there), then I can just as well remember that they are in the “Games” category. Actually, there I don’t have to memorize anything. Why don’t we just number all the buttons in an application, since the user could surely remember what button number 4, right next to button number 3 on window number 5, does. I mean, the user can use spatial memory, right?

Now as for “that’s why there is search” … yeah, but that only works when you know what you are searching for. I usually know what I am searching for only once I’ve found it. It’s this idea that google is the best interface for everything. Google is useful for the web because there are waaaaay too many pages to categorize. That’s not a problem for applications. Search is a compromise: it is a way to find things when there are too many to organize.

The argument that “some apps don’t fit into one category neatly” also fails. The whole idea of the vfolder menus was that you could have arbitrary queries for submenus. You can have an app appear in every category where it makes sense. Just because the people making up the menus didn’t get it quite right doesn’t make it a bad idea. Also, this now leads to a lot of apps without any categories. The problem, I think, is with the original terminology. When I was designing this system I used “Keywords” instead of “Categories.” But KDE already had Keywords, so we used Categories; you should, however, think of them as keywords on which to query where the icon appears. A category describes the application; it doesn’t hardcode where the icon appears. Unfortunately, there seems to be a lack of understanding of this concept, which has always led to miscategorization. For example, someone changed the original design to say that some things were some sort of “core categories” or whatnot, that only one should appear on an icon, and that there would be a menu with that name. That defeats the purpose. It’s like beating out the front glass of your car and then complaining about the wind.
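The keyword-query idea is easy to sketch (a toy illustration with made-up menu queries and app data; the real system is the freedesktop.org menu and desktop-entry machinery, which is far more general):

```python
def parse_categories(value):
    """Split a .desktop Categories value (semicolon-separated, per the
    freedesktop.org Desktop Entry spec) into a set of keyword strings."""
    return {c for c in value.split(";") if c}

# Each submenu is a query over the keywords.  Here a query is just one
# required keyword; the vfolder design allowed arbitrary boolean queries.
MENUS = {
    "Games": "Game",
    "Internet": "Network",
    "Graphics": "Graphics",
}

def menus_for(categories):
    """Return every menu whose query the app's keywords satisfy."""
    return [name for name, keyword in MENUS.items() if keyword in categories]

# An ftp client tagged Network;FileTransfer; lands in "Internet" without
# anyone hardcoding its location; a network game would land in both
# "Games" and "Internet".
print(menus_for(parse_categories("Network;FileTransfer;")))
```

The point is that Categories describe the app; which menus it shows up in is the result of queries, and an app can legitimately match several of them.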

Finally, what if I lend my computer to someone to do something quickly? No, I am a normal person, so I don’t create a new account. And even if I did create a new account, the default sorting of apps is unlikely to be helpful. If someone just wants to quickly do something that doesn’t involve the icons on the dash, they’re out of luck if I have lots of apps installed. Plus, at work I will have a different UI, on my laptop I have a different UI, and any other computer I use will have a different UI. I can’t customize every one of them just to use them.

As it is, when I had a friend use my computer with gnome-shell, they were lost. If it’s made even less usable … thank god for XFCE, though I worry that these moves towards iphonization of the UI will lead to even worse categorization. There are already many .desktop files with a badly filled out Categories field, and now there will be even less incentive to do it correctly.

I just feel like ranting about determinant notation. I always get in this mood when preparing a lecture on determinants: I look through various books for ideas on better presentation, and the somewhat standard notation makes my skin crawl. Many people think it is a good idea to use

instead of the sane, and hardly any more verbose

or .

Now, what’s the problem with the first one?

1) Unless you look carefully you might mistake the vertical lines for brackets and simply see a matrix, not its determinant.

2) Vertical lines look like something positive, while the determinant may well be negative.

3) What about 1 by 1 matrices? Is the determinant of or is it the absolute value of ?

4) What if you want the absolute value of the determinant (something commonly needed)? Then if you’d write

that looks more like the operator norm of the matrix than the absolute value of its determinant. So in this case, even those calculus or linear algebra books that use the vertical lines will write:

So now the student might be confused, because they don’t expect to see “det” used for the determinant (consistency in notation is out the window).

So … if you are teaching linear algebra or writing a book on linear algebra, do the right thing: Don’t use vertical lines.
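For concreteness, here is the recommendation in LaTeX terms (a minimal illustration with a made-up 2 by 2 matrix, not from the original post):

```latex
% Preferred: \det with ordinary brackets; unambiguous even for 1-by-1 matrices
\[ \det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc \]

% Discouraged: vertical bars, easy to misread as brackets or absolute value,
% and "positive looking" even when the determinant is negative
\[ \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc \]

% With bars, the absolute value of a determinant reads like an operator norm:
\[ \left| \begin{vmatrix} a & b \\ c & d \end{vmatrix} \right|
   \quad \text{vs.\ the clear} \quad
   \left| \det \begin{pmatrix} a & b \\ c & d \end{pmatrix} \right| \]
```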

So, another GNOME UI fail. Marketa has a new computer. Using compositing leads to crashes, so she is using fallback GNOME (I’m thinking I should switch her to XFCE as well). But this is really not a problem of the fallback.

Anyway, the UI fail I am talking about is “adding a printer,” something she figured out how to do previously, but not with the new printing UI. The thing is, the window is almost empty, and it is not at all clear what to press to add a printer. So she hasn’t figured it out, and I had to help out. I figured out three things:

1) The “unlock” thing is totally unintuitive. She did not think of pressing it. She doesn’t want to unlock anything; she wants to add a printer. Without pressing it, some parts of the UI are greyed out, but it’s not clear what is supposed to happen.

2) There is just a “+” in a lower corner that you have to press. She did not figure out that that’s what you press to add a printer. A button saying “Add printer” would have been a million times better.

3) Not even I figured out how to set default options for the printer, such as duplex, resolution, etc. Pressing “Options” brings up something about forbidden users or whatnot, which is a totally useless option on a laptop.

If a PhD who has used computers for years can’t figure out how to do something like this, there is a problem with the UI.

This is a symptom all over the new GNOME system settings. It’s very hard to set something up if it didn’t set itself up automatically. There’s also a lot of guesswork involved now. The UI may be slightly prettier, but it is a step backwards usage-wise.

Here’s a solution:

1) Get rid of the lock thing; go back to the model where, if you do something that requires authentication, you are asked for authentication. Why should there be extra UI that only confuses the user?

2) Change the “+” and “-” buttons to have actual text: “Add printer” and “Remove printer.”

3) “Add printer” should be very prominent in the UI. I bet 90% of the time when a normal user enters that dialog, they want to add a printer.

4) Put options where they can be accessed. Surely the options are accessible somewhere, but I didn’t find them.

Apparently computer science is not too interesting and costs too much: $1.7 million at the University of Florida, apparently. So obviously we cut it, so that the athletic department (costing $99 million) can get an extra $2 million a year. It’s obvious where our priorities are as a society. Even if nothing got cut, 1.7 vs 99 is pretty bad.

After a year of using GNOME shell, I finally got fed up with it. GNOME shell is unfortunately really annoying to use. There are so many decisions it tries to make for you that it gets some of them wrong: new window placement, the whole status thing in the corner getting triggered when I don’t want it to, the overview getting triggered all the time by mistake, as well as, for example, custom launcher setup. When I run my script for editing latex, it never shows evince, and I have to focus it by alt-tab “by hand.” The whole Alt-Tab behaviour is totally nuts. I also really REALLY hate the fact that dialogs are now “attached” to their parents. I often need to look at the original window because I just forgot what I was going to type in, such as “how many pages did the document have again, and what page am I on now” when printing. This happens really often for me, so GNOME shell drives me up the wall. There are just so many little things like that which overall make it a total pain. Some are solved through extensions or a change in behaviour, but I use several computers, so learning different behaviour just for my laptop is annoying.

Consistency be damned is the new motto now. Those new and cool interfaces (Unity, Cinnamon, GNOME shell) are all quite different. (I haven’t tried KDE; I guess I won’t be able to go there out of GNOME loyalty, which was the only reason I kept using GNOME shell for so long.) Apparently rounded corners are more important than working correctly.

So at first I was happy with GNOME shell, mostly because it seems to be aimed (despite what anyone says) at people who use the command line. People who mouse around will find GNOME shell annoying. For example, my wife will not be searching for apps with the keyboard to launch them. There is also the fact that it’s impossible to easily customize GNOME nowadays for a specific purpose (using dconf-editor, which has a totally broken UI, is really not an answer; I wasted lots of time trying to get some things to work). Either use GNOME shell for exactly what it’s designed for, or use something else. So flexibility is also out the window.

GNOME shell also seems to think that your mousing is very precise, which it never was for me. I commonly press the wrong button, or the mouse goes somewhere it shouldn’t, and the interface punishes you for it. See above about entering the overview by mistake (whenever I wanted to hit a menu or the back button or some such).

I tried LXDE, but it’s buggy as hell (at least in Fedora). The window list seems to jump around, launchers don’t always work, the battery status doesn’t work, and the workspace switcher is totally broken. OK, so no go there. I tried Cinnamon for a few days, but it’s bad in many of the ways that GNOME shell is. Unity is even worse.

I had some trouble with XFCE in the past (on an ubuntu install that was upgraded a few times, so it might not have been fair to XFCE). Anyway, I installed it on Fedora, quickly set it up, and … it works. It’s not perfect, but I don’t need it to be perfect. I want it to just work, and so far it does. It gets out of my way, unlike GNOME shell, which kept trying to get in my way. Plus, it’s fast.

Two things I saw recently: 1) NASA’s budget for climate research is 1 billion dollars (for all those satellites and all that); 2) Facebook buys Instagram for 1 billion dollars.

Now we can see where our priorities (as a society) lie. What I don’t get is that Instagram has software that a reasonably good programmer could have written in a few weekends of binge hacking. It does nothing really new. You could even use fairly off-the-shelf components. Perhaps the servers and the online setup might be costlier, but still, nothing all that strange. To think that this is worth as much to us as figuring out where the next hurricane will hit, or when the ice caps will melt, is “interesting.”

Though it is not totally out of sync with what else is happening. The entire UC system is responsible for several nobel prizes, innumerable new cures for diseases, and leaps in our understanding of the world, not to mention educating a huge number of students; yet when that system has a budget hole the size of one CEO’s bonus, it’s a huge hit for the university. Something is off in our priorities. Actually, there is a very good likelihood that this CEO will die of some cancer that wasn’t cured because we don’t fund science enough.

So CEO salary increased by approximately 9.7%, adjusted for inflation, every year between 1990 and 2005 [1] (that is an approximately 300% increase over that time, so 4 times what they made in 1990). Anyway, that is a doubling time of about 7.5 years. Now, the median CEO (among the top 200) made approximately 10 million dollars a year in compensation in 2010 [2]. In 2009 there were about 8.3 trillion dollars in existence [3]. So a CEO makes approximately a millionth of the money in the world; in other words, if we had a million CEOs, we’d exhaust our money supply. It takes about 20 doublings to get to a million. Hence, in about 150 years one CEO will make all the money in the world. And this is all inflation adjusted.

But we don’t have to go so far to get into trouble. We did talk about the top 200, so when would the top 200 together make all the money in the world? Well, that requires only about 12 doublings, so about 90 years. OK, so in less than 100 years, the top 200 CEOs will suck out all the money in the universe.
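The back-of-the-envelope numbers above are easy to check (a quick sketch using the post’s figures: roughly 9.7% yearly real growth, 10 million dollars median pay, 8.3 trillion dollars in existence):

```python
import math

growth = 0.097                                # yearly real increase in CEO pay
doubling = math.log(2) / math.log(1 + growth) # doubling time, ~7.5 years

pay = 10e6      # median top-200 CEO compensation, 2010
money = 8.3e12  # rough total money supply, 2009

# Doublings for one CEO's pay to reach all the money, and years at ~7.5/doubling
d_one = math.log2(money / pay)           # ~20 doublings
# Doublings for the top 200 combined to reach all the money
d_200 = math.log2(money / (200 * pay))   # ~12 doublings

print(round(doubling, 1))        # -> 7.5
print(round(d_one * doubling))   # -> 147
print(round(d_200 * doubling))   # -> 90
```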

Anyway, the problem is the following: the companies are not rewarding an individual CEO for good performance; they are rewarding all future CEOs. The thing is, there is no “starting salary.” A CEO who just started is (statistically) making about the same as one who’s been around for quite a while. If you started all CEOs at a base salary, then one particular CEO’s salary could rise at 10% a year, because he’d be with the company only a fixed number of years, and the problem would be manageable. As it is, to whatever extent there is anything like a “starting salary,” the increase an individual CEO sees is even higher than 10% a year. Essentially, the starting salary itself is increasing at 10% a year.

Let’s look at an even more realistic example of how quickly we get into trouble. The CEO’s salary can easily be 1% of the revenue of the company [4]. In fact, some small private colleges are paying 1% of their budgets to their university president, a group where a similar thing has happened. Now think about the doubling. If it is 1% now, it will be 2% in 7.5 years, 4% in 15 years, 8% in 22.5 years, 16% in 30 years, 32% in 37.5 years, 64% in 45 years, and we reach 100% in less than 50 years. So in less than 50 years the entire revenue would have to go to supporting the CEO. Now you say: well, but the revenue is also growing. Not so fast. The 10% pay increase is an overall figure; it includes companies that did badly as well as those that did well. One would think that the growth of revenue on average (including failed companies) is not that much more than inflation, and this is all adjusted for inflation. In any case, CEO pay is definitely growing a lot faster than the economy (and hence your average revenue), so you hit the wall sooner or later. Even if we lop off another 2% to adjust for growth, the doubling time for CEO pay is still 9 years.

Of course, a problem would appear a lot earlier than in 50 years. So it’s not only the “rich get richer” and “things are not fair” argument: this state of affairs is actually unsustainable even over a relatively short period of time (within our lifetimes). I think people don’t understand that exponential growth is really, really fast. That’s why pyramid schemes never work. It’s why Ponzi schemes usually fail far quicker than the perpetrator hoped. A 10% increase a year does not seem like much (just as a 10% return on investment doesn’t seem like that terribly much).