
Tue, Aug. 10th, 2010, 07:43 pm

There are a few frequency meter implementations available for Atmel's microcontroller series, but I haven't come across a reasonable reciprocal frequency counter implementation, let alone one without extra hardware.

Thus I created a software-only reciprocal frequency counter running on an ATtiny2313 (ATmega not tested yet) with a usable frequency range of 0 Hz..10 MHz (when running at 20 MHz), sub-Hz resolution, and 10 ppm accuracy or better.

This requires 64-bit arithmetic, for which the libgcc routines are prohibitively expensive on the ATtiny. The 64-bit routines in C and assembler that I implemented for this project require much less space.
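To illustrate the principle (a hedged sketch, not the project's actual code - the function name and the millihertz scaling are my own choices): a reciprocal counter counts whole input periods during a gate interval while simultaneously counting reference clock cycles, then divides. The intermediate product is where the 64-bit math is needed.

```c
#include <stdint.h>

#define F_REF 20000000UL  /* 20 MHz CPU clock used as the reference */

/* Hypothetical sketch of the core reciprocal-counter computation:
 *   input_edges - number of complete input periods seen during the gate
 *   ref_cycles  - number of reference clock cycles during the same gate
 * Returns the input frequency in millihertz, which preserves sub-Hz
 * resolution.  The product input_edges * F_REF * 1000 easily exceeds
 * 32 bits, hence the 64-bit intermediate. */
static uint32_t reciprocal_freq_millihz(uint32_t input_edges,
                                        uint32_t ref_cycles)
{
    uint64_t num = (uint64_t)input_edges * F_REF * 1000u;
    return (uint32_t)(num / ref_cycles);
}
```

For example, one input edge over a two-second gate (40,000,000 reference cycles) yields 500 mHz, i.e. 0.5 Hz - which is why sub-Hz readings come for free with this method.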

Dick Streefland has created a software-based USB protocol implementation called usbtiny for the AVR ATtiny microcontroller family. I have stripped down the code so that I could add detection of a single programmable IR signal. When the signal is detected, an output pin triggers a power button press for 250 ms. That way, media center PCs, for example, can be switched on remotely. All this is documented on the project page.
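The detection step can be sketched roughly like this (an illustration only, not the stripped-down usbtiny code; the function, the tick units, and the 25% tolerance are my assumptions): compare a captured sequence of IR pulse durations against the programmed reference pattern, and only fire the 250 ms output pulse when every pulse is within tolerance.

```c
#include <stdint.h>

/* Hypothetical sketch: match a captured sequence of IR pulse durations
 * (in timer ticks) against a stored reference pattern, allowing a
 * +/-25% tolerance per pulse.  Returns nonzero on a match, which would
 * then trigger the 250 ms power-button output. */
static int ir_pattern_matches(const uint16_t *captured,
                              const uint16_t *reference,
                              uint8_t len)
{
    for (uint8_t i = 0; i < len; i++) {
        uint16_t lo = reference[i] - reference[i] / 4;
        uint16_t hi = reference[i] + reference[i] / 4;
        if (captured[i] < lo || captured[i] > hi)
            return 0;
    }
    return 1;
}
```

A per-pulse tolerance like this is a common way to cope with IR receiver jitter; the real project may well use a different matching scheme.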

During this project I decided to revive my passing knowledge of board layout and etching. For the layout I used KiCad, IMHO the first open source layout software that is actually usable. The result looks pretty good: the 8/10 mil grid shows excellent sharpness and only very little undercut. Especially considering that the material and chemicals have been lying around here unused for - what? - 20 years...

I have just published my RAnsrID git repository on gitorious.org. Beginning now, I will stay backward compatible with old versions of the journal and disk meta structure blocks. Get the git repo with: git clone git://gitorious.org/ransrid/ransrid.git

Unfortunately, there is little (read: no) documentation available yet; that will change after LinuxTag. Until then, the only doc is the heavily commented source code. Grab it, study it, enhance it, send a patch - that's the open source way.

For LinuxTag I have another goodie - I will be traveling with four USB disks and will give a short live demo of what the system is already capable of. Live adding and removal of disks doesn't work yet, but reading, writing, validation, and rebuilding do.

Note that nbd used to freeze machines during writes if client and server were running on the same machine. Kernel 2.6.26 includes a patch that ought to fix this issue, but there were some (inconclusive?) discussions about this patch beforehand. Using Xen for the client seems to work around this issue as well, though.

In my spare time I've been working on a RAID-lookalike system for storing large amounts of data with multiple redundancies - and with significantly lower power consumption and disk spinning time than standard RAID if you only access single large files in a typical session.

The whole thing is implemented as a network block device (nbd), and will be presented (in an early, but at least partially working state) at LinuxTag 2010 in Berlin.

Note that this is not a direct competitor to a standard RAID solution - in fact, I propose using a RAID 1 for the journal it needs (e.g. use the system disk - you're already using a RAID there, right?). For a comparison table check the project page.

Source will be available soon; I haven't decided which git hosting service to use yet. I don't think it's reasonable to put this on freedesktop, because its relation to freedesktop is close to nothing. I might change my mind, though.

If you can film in 3D with a stereo camera, do it! It's so much better than post-processed 3D. The 3D effect in Alice in Wonderland always felt a bit out of place. And I'm not talking about composition, but rather about the tiny depth differences in details that you probably don't get exactly right when doing post-processing. And Burton was probably more careful than others will be in what look like studio-driven films.

OK, granted, the comparison is anything but fair. Avatar had a much higher budget, and it shows. It delivered probably the single least intrusive 3D experience for me so far. For purely animated films 3D is an easier task; there, Bolt felt almost as good as Avatar, Up a bit worse, but still so much better than AiW.

Many films still fall into the old pitfalls, like trying to shock / "interest" the audience with cheap sticking-out-of-the-frame tricks (My Bloody Valentine being a very negative example). That bores at best, and certainly draws you out of the film. If it's done just for fun at the beginning of the film, like in Monsters vs. Aliens, all right, but please behave afterwards. In Avatar the 3D effect was never annoying; it just drew you into the film and let you forget that you were watching a 3D film. It just fit.

Ah, and before I forget: the single most important feature of movies filmed in 3D is not the 3D effect. It's the fact that, thanks to 3D, directors and camera operators finally have to think about good camera paths and a slower cutting pace again. There's no chance to create "dramatic" effects with a wacky camera and half-second cuts; you have to have actually good choreography in action scenes. Otherwise the audience will... I'll spare you the gory details.

Note that I still believe that a film's content is far more important than its style. Still, we're talking about moving pictures; they ought to be pretty...

I had been nominated as a candidate for X.org's Board of Directors this year (actually for two years, because members are each elected for a two-year term) - and was actually voted for and elected.

During this year's elections a number of questions came up about several issues, partly regarding the financial situation of the foundation, partly about how the board members communicate with each other and the regular members. It basically all boiled down to the number one perceived issue with the X.org board:

It's transparency. Or rather the lack thereof.

It's generally accepted that even some of the actions required by the By-Laws (like publishing meeting minutes) have been somewhat neglected. As a result of the discussions, Eric Anholt has now published the IRC logs on members.x.org - thanks for that! Also, the IRC channel for the regular board meetings (#xf-bod on irc.oftc.net) has probably not been advertised enough since its opening to the public. It is also safe to assume that this wasn't intentional, but just due to lack of time - the daily schedule of most open source people is extremely cluttered (geez, when did I last blog?!?).

I promise that I will try my very best to push for transparency as much as possible, maybe starting by taking/polishing minutes after the next (my first) IRC meeting.

I'm quite excited about the days to come. And that is a good thing, because I'm pretty sure it will be - say - a little bit less thrilling after a while... as with all good things.

It has been about half a year since the last release, but finally, over a hundred git commits later, we have version 1.3.0 of the radeonhd driver.

You may think that a release "cycle" of 6 months is... not that much. However, like most open source projects, radeonhd is pretty much understaffed. Together with lots of additional work on Novell's side (which of course reduces the amount of time Egbert and I can spend on radeonhd), it took us a while to finally find some time for polishing. Because 2D acceleration is now active by default on (almost) all chipsets, we were seeing more regressions than usual.

Never mind, you're probably more interested in the new release. These are the main changes:

I've added support for PowerPlayInfo_V4 (the one I reverse engineered lately) to radeonhd today - and a whole bunch of heuristic validation routines. This means that the driver finally has some idea about the possible chip frequencies and voltages. Determination of some of the values is still scary, and I don't know yet whether we will stumble over them - especially the minimum frequencies are sort-of guessed from the (known) minimum PLL output frequencies. But we need them, because some cards (e.g. one RV770 I have here) only have one known good memory clock configuration, while lowering it can save tons of power (tested + measured; GDDR5 memory really uses a lot of power). Therefore, the values are checked against reasonable bounds and rejected if they exceed them. So if a future chip needs less than 500 mV for operation, we will have to update these tests...
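The kind of plausibility check described above could look roughly like this (an illustrative sketch only - the struct, field names, and most limits are my assumptions, not radeonhd's actual code; only the 500 mV lower bound is taken from the text):

```c
#include <stdint.h>

/* Hypothetical representation of one power setting read from the
 * AtomBIOS PowerPlay tables. */
struct pm_setting {
    uint32_t engine_clock_khz;
    uint32_t memory_clock_khz;
    uint16_t vddc_mv;
};

/* Reject obviously bogus values read from the tables.  The 500 mV
 * floor matches the text; the clock bounds (10 MHz .. 2 GHz) are
 * made-up placeholders for "reasonable bounds". */
static int pm_setting_is_sane(const struct pm_setting *s)
{
    if (s->vddc_mv < 500 || s->vddc_mv > 2000)
        return 0;
    if (s->engine_clock_khz < 10000 || s->engine_clock_khz > 2000000)
        return 0;
    if (s->memory_clock_khz < 10000 || s->memory_clock_khz > 2000000)
        return 0;
    return 1;
}
```

The trade-off is exactly the one mentioned above: hard-coded bounds catch garbage tables today, but need updating whenever a future chip legitimately moves outside them.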

The current code has all these bits in place, but doesn't configure anything differently than previous versions. First, a set of reasonable settings has to be determined, which will need additional heuristics (this is done in rhdPmSelectSettings(), which still needs some love). This selection also depends on how difficult it will be to change the engine clock, the memory clock, and the VDDC voltage. So far we only set the engine clock, but code is already in place for the other two. Before setting a clock, it must be ensured that the engines using this clock are idle - that's why we (ahem!) only set the engine clock once at the beginning so far:

For setting the engine clock, the engine must be idle (surprise).

For setting the memory clock, memory accesses must be prohibited. Which means that the screen will be blank during this phase. It remains to be seen whether the vertical blank is long enough to do this on the fly. Otherwise this means that we typically shouldn't change the memory clock during runtime. Things get messy when multiple screens are attached...

For setting the VDDC voltage, all engines must be idle. Also, certain combinations of engine and memory clock require certain VDDC voltages.
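The three preconditions above can be condensed into a small guard (a hedged sketch with hypothetical names, not radeonhd's actual API; the dependency of the VDDC voltage on particular clock combinations is left out):

```c
/* Which reprogramming step is being attempted. */
enum pm_change { PM_ENGINE_CLOCK, PM_MEMORY_CLOCK, PM_VDDC };

/* engine_idle:     no engine using the clock is busy
 * memory_quiesced: all memory accesses prohibited (screens blanked) */
static int pm_change_allowed(enum pm_change what,
                             int engine_idle, int memory_quiesced)
{
    switch (what) {
    case PM_ENGINE_CLOCK:
        return engine_idle;                    /* engine must be idle */
    case PM_MEMORY_CLOCK:
        return memory_quiesced;                /* no memory accesses */
    case PM_VDDC:
        return engine_idle && memory_quiesced; /* everything idle */
    }
    return 0;
}
```

Ordering follows from the guard: the voltage change has the strictest precondition, so it naturally goes last (or first, when raising voltage before raising clocks).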

The number of known good configurations is limited, and sometimes they contradict each other. It remains to be seen whether we can sort-of interpolate between good settings, because typically there are more voltage values available than are actually used...

Later we will need an oracle for selecting the right power state according to the current usage pattern. It might be easier to do that in kernel space, but this remains to be seen.

That's it for today; I actually hoped to get more accomplished during our HackWeek. But reading out dynamic AtomBIOS tables can be... intriguingly complex.

I finally spent a few spare minutes (understatement of the week) to sort-of reverse engineer the PowerPlayInfo tables of newer ATI cards - and somewhat succeeded. But the information I found so far is not as encouraging as I'd like it to be. Basically, you get a list of potential combinations of engine clock, memory clock, and core voltage, plus a number of unknown flags. So far so good, but on some (especially high-end) cards the entries do not vary as much as I'd like them to, and many combinations do not exactly make sense. Others are repeated over and over.
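Purely for illustration, such a table entry could be modeled like this - field names, types, and units are my assumptions, not the real (packed) PowerPlayInfo_V4 layout:

```c
#include <stdint.h>

/* Hypothetical model of one power-state entry as described above:
 * a clock/clock/voltage triple plus flags of largely unknown meaning. */
struct pp_state_entry {
    uint32_t engine_clock;  /* assumed 10 kHz units */
    uint32_t memory_clock;  /* assumed 10 kHz units */
    uint16_t vddc_mv;       /* core voltage in mV */
    uint16_t flags;         /* meaning largely unknown so far */
};
```

With entries this terse, the repeated and nonsensical combinations mentioned above can only be weeded out by the kind of heuristic validation the driver now does.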