On the morning of the 20th of January, my desktop decided to die for no obvious reason at all.

Unlike the Windows 10-based laptop that got bricked only a couple of weeks earlier, Nanacore simply got stuck in an eternal reboot loop, switching back and forth between the updated BIOS (read: UEFI) and the backup BIOS (UEFI) images. The backup BIOS wouldn’t proceed any further than the POST splash screen because of a known compatibility issue with NVIDIA’s more recent cards (GeForce 9XX and above, I believe), which is the very reason I had to update the main BIOS last year when I replaced my old potato card. I spent about two hours removing the GPU, storage devices, and memory modules trying to rule out any potentially faulty peripherals, to no avail. Once the GPU was out of the way, the backup BIOS (UEFI) would attempt to display an error message explaining why it had kicked in, but the attempt would make it immediately crash and reboot — hence I decided “eternal” was an appropriate qualifier for the situation. I reconnected everything there was to reconnect other than the main motherboard power plug, which was more or less glued to it. I figured trying to pry it off with more force would risk breaking the board, and it looked completely fine to me anyway. There weren’t any burn marks, bent or popped capacitors, or any other visible signs of physical damage whatsoever.

So that’s one chapter of my computing life that has now come to an abrupt and tragic end, at the beginning of what’s already proving to be quite a difficult year for me, for reasons I will not discuss here.

R.I.P. Nanacore (October 25th 2012–January 20th 2017)

Suffice it to say, I had to spend approximately one week with Reicore, who... isn’t really in the best shape anymore, with some keys falling off, heat issues, and insufficient processing power for pretty much any software or website written in the past 5 years. At least I was pleasantly surprised to find that its old Intel GM45 graphics processor can somehow drive two 1920x1080 screens without dying — one over HDMI and the other over VGA, since the laptop doesn’t have any other display connectors.

Fortunately, I got someone to help me cover the full costs of purchasing a new desktop, and I was also able to keep my old GPU and disk drives. This latter point leads me to suspect that it wasn’t the PSU failing (thankfully) but rather the motherboard and/or CPU dying from old age. Seeing as how this has been an extremely hot summer so far, and I had been dealing with computer cooling issues all throughout the second half of 2016 because I couldn’t make up my mind about purchasing a new case and CPU+motherboard combo, it was all a foregone conclusion. There is a lesson to be learned from this debacle, I am fairly sure.

I had been contemplating building Hanacore (“flower”, following the Japanese-based naming convention I adopted for Reicore onwards) around October last year, but I postponed making a decision until Q2 2017. The reason for this is that I expected Intel’s Kaby Lake desktop processors to begin shipping worldwide around that time, based on what little I could remember from the Ivy Bridge release cycle. However, it turns out that Intel released them on January 3rd instead, so by the time Nana died, a few Intel Core i7-7700Ks were already in stock here at local retailers in Chile. Yep, only the super-expensive unlocked CPUs. For whatever reason, retailers here don’t seem to ever have the locked counterparts in stock for very long anymore, starting with the Haswell line-up. But whatever, it’s not my money I was about to spend, so I thought I might as well take a YOLO approach for once. The alternative would’ve been to spend months tweeting at less than 20 frames per second. Or spend my own money on a laptop that wouldn’t satisfy all my requirements.

In particular, ever since I switched to a dual-screen setup at the start of December last year, I don’t think I could ever go back to working on a single screen. Most laptops these days seem to have but a single HDMI port. Not to mention that mobile GPUs are obviously inferior to their desktop counterparts — watching my hard-earned GTX 1060 gather dust on a shelf for months would have been devastating for my morale.

| Name | Year | EOL | CPU | RAM | Storage | GPU | OS |
|------|------|-----|-----|-----|---------|-----|----|
| Hanacore | 2017 | ― | Intel Core i7-7700K, 4.2 GHz HT quad core (Kaby Lake) | 32 GiB | 512 GB SSD, 3 TB HDD, 2 TB HDD | NVIDIA GeForce GTX 1060 6 GiB | Debian testing 2017-01-28 (Stretch), Windows 10 Home |
| Nanacore | 2012 | 2017 | Intel Core i7-3770, 3.4 GHz HT quad core (Ivy Bridge) | 16 GiB | 512 GB SSD, 2 TB HDD | NVIDIA GeForce GTX 1060 6 GiB | Debian testing 2017-01-19 (Stretch), Windows 10 Home |
| Derpycore | ??? | 2017 | Intel Core i7-3537U, 2.0 GHz HT dual core (Ivy Bridge) | 4 GiB | 1 TB SSHD | NVIDIA GeForce GT 735M | Windows 10 Home Single Language |
| Reicore | 2010 | ― | Intel Pentium T4300, 2.1 GHz dual core (Penryn) | 4 GiB | 500 GB | Intel GM45 | Debian testing (Wheezy) |
| Bluecore | 2008 | ― | AMD Athlon 64 X2 QL-62, 2 GHz dual core | 4 GiB | 250 GB | ATI Radeon HD 3200 | Debian testing 2012-10-22 (Wheezy), Windows Vista SP1 |
| Greycore | 2007 | 2008 | AMD Turion 64 MK-38, 2 GHz | ??? GiB | 80 GB | ATI Radeon Xpress 1100 | Debian GNU/Linux 5.0 (Lenny), Windows Vista |
| Blackcore | 2006 | 2016 | Intel Pentium 4, 2.6 GHz HT (Prescott) | 1 GiB | 160 GB | VIA garbage | Windows XP SP3, Debian GNU/Linux 6.0 (Squeeze) |
| Unnamed Desktop #2 | 2002 | 2006 | Intel Celeron, 1.3 GHz (‘Celeron-S’) | 256 MiB | 40 GB | Intel 810E | openSUSE 10.0, Windows XP SP2 |
| Unnamed Desktop #1 | 1997 | 2016 | Intel Pentium w/ MMX, 166 MHz (P55C) | 32 MiB | 1.2 GB | S3 Trio64V+ | Windows 95 OSR 2.0 |

Anyway, long story short, I picked up my new desktop from the store on the 28th, and it’s been working well so far. This time I picked all components other than the CPU cooler by myself.

The motherboard is an MSI Z170A Gaming M7 — a Skylake chipset, which means it required a firmware update (done during assembly at the store) to be able to drive the Kaby Lake CPU. There weren’t any Kaby Lake motherboards available with onboard sound controllers known to work correctly on Linux, unfortunately, and from a quick glance at Wikipedia I couldn’t really find any relevant differences between the Z170 and Z270 other than increased PCI Express bandwidth for use by M.2 SSDs (which I don’t have) and Intel’s proprietary alternative (which I don’t intend to have either). But most importantly, I wanted to avoid the sound controller fiasco I had with Nana, whose CA0132 controller never had proper Linux drivers published by Creative — only a very locked-down version for a Chromebook laptop, with pin and DSP effect configurations that wouldn’t work correctly with anything else. With that out of the way, I am now able to use any speakers and headphones I want without worrying about them not being recognized or not getting amplified. I’m really looking forward to replacing my 10-year-old speakers.

Other than a couple of isolated USB-related incidents that I couldn’t properly attribute to anything (and an apparent compatibility issue between a 3G modem driver and NVIDIA’s stack, of all things), I’ve not had any problems with Debian stretch and Linux 4.9 thus far. I also managed to do a clean reinstall of Windows 10 a couple of nights ago. The old 2 TB HDD now holds local backups, which I intend to generate on a more frequent, automated basis (compared to my more-or-less bi-weekly external backups); the new 3 TB HDD holds my Linux home dir and Windows install, and the old SSD continues to hold the root filesystem, games, and some parts of my user profile (~/.config, ~/.local, and ~/.wine).
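The automated part will most likely end up being little more than a couple of cron entries driving rsnapshot, which I already use for the external backups. A minimal sketch, with placeholder schedules and paths rather than my actual setup:

# /etc/cron.d/local-backups (sketch; the interval names must match
# the retain lines in /etc/rsnapshot.conf)
0 */4 * * *	root	/usr/bin/rsnapshot hourly
30 3 * * *	root	/usr/bin/rsnapshot daily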

By the way, I didn’t even need to reinstall anything at all with Debian — I just configured it to use a stock 4.9 kernel instead of the custom configuration I used for Nana. On the Windows side of things, there were some complications because of a thing I did two days before Nana died that rendered it unbootable, but once I got it sorted out, it would strangely boot just fine. I still decided to do a clean install just to safely purge all of Nana’s system-specific drivers (chipset, network, etc.) and start fresh with a larger partition on the new 3 TB drive — around 400 GB instead of 96 GB.
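Back on the Debian side, pointing the system at a stock kernel boils down to installing the metapackage and letting apt track the latest 4.9 image. A sketch, assuming an amd64 install:

# pull in Debian's stock kernel metapackage; the initramfs and GRUB
# config are regenerated automatically by the package hooks
apt-get update
apt-get install linux-image-amd64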

Things have been running so smoothly thus far, in fact, that the CPU averages 35 °C while idle in summer, whereas Nana would average around 46 °C (38 °C in winter). The case and cooler fans produce far less noise than Nana’s too, even under heavy workloads. That said, the comparison might be a little unfair — Nana’s CPU was cooled by a stock fan, and Hana’s is handled by an all-in-one water cooler. Note that I haven’t tried overclocking yet, and in all likelihood, never will.

So yeah. New hardware. Cool.

Oh, and Derpycore from the table above is the bricked Windows 10 (originally 8) laptop I alluded to at the beginning of this post. It was a Sony Vaio that belonged to someone else (who wasn’t using it), which I borrowed last year when I needed a mobile computing device, since Reicore hasn’t been fit for that purpose for about 4 years with its decayed battery and broken, glitchy keyboard and touchpad. After months of semi-regular use for Windows-specific tasks for which a VM wouldn’t suffice, one day last month it just decided to never power up again. No beeping, no blinking LEDs, nada. Kind of like Nanacore afterwards, it died in its sleep after a whole session without issues, the sole difference being that Derpy wouldn’t even try to wake up ever again. Just silent icy death. I figure that’s what I get for not giving it a more adequate name. I should probably make sure to give my next laptop a regal-sounding name so it will last me forever.

EDIT:

Blackcore died last year due to a capacitor malfunction while I was inspecting it before donating it to somebody else. Probably. I am not really sure what happened there, but after several attempts I at least managed to boot it up long enough to clone its (old-style PATA) hard disk and grab what little of value I could find on it. Afterwards it would power up with its fans at full speed and never reach POST. That makes three computer deaths in under 12 months.

Finally, Unnamed Desktop #1 was last spotted outside my house circa August last year in a really poor state, accumulating rust and moss, so I’m fairly positive it’s now legally dead. I feel its loss more than Nanacore’s, strangely enough. It’s probably because it was my first proper computer ever. Does it count as a fourth death? I’m not sure.

After years and years of development, drama, script rewrites, field research, technological advancements, budget cuts, and temporal shenanigans, today, March 5th 2013, I can say for sure that After the Storm is complete with the release of the most important milestone yet: version 0.9.0, with all three episodes finished at 13 scenarios each.

A few caveats for people upgrading from the previous release:

This release adds the final Epilogue scenario for Episode III, which will become a bonus feature in 0.9.1. If you had previously finished AtS Episode III using versions 0.8.90 or 0.8.90.1, you will have a start-of-scenario save for the Epilogue scenario which you can use after upgrading to this version.

As usual, for the most stable experience I advise using Wesnoth 1.10.x — preferably 1.10.5 or a newer version when it becomes available. All episodes of this campaign were primarily developed and tested on 1.10.x, and there are subtle behavioral differences in the game engine between 1.10.x and 1.11.x that may break some sequences or cause other unintended side-effects.

Various issues reported by playtesters on Wesnoth 1.11.1 were fixed. Most notably, this release implements a workaround for mainline bug #20373, which is relevant for Episode III scenarios starting from Dark Sea. People who experienced player information loss (recall and recruit lists, gold reserves) after Dark Sea on 1.11.1 will need to replay that scenario from the start-of-scenario save (NOT the Turn 1 save!) in order for Wesnoth to install the code in charge of solving that issue in later scenarios. This code will not work on Wesnoth 1.11.2 — you will need to finish Episode III on 1.11.1 before switching to 1.11.2 (whenever it is released, anyway).

This... has been a really long journey, to say the least, and I pretty much lost all hope of ever finishing this campaign at various points over the past years. Development started in 2008 and quickly stagnated for various reasons:

Perceived lack/loss of interest from the audience

Excessive perfectionism on my part

Various IRL struggles, including health and personal matters

Constant conflicts of interest amongst the few people who were actually interested in IftU and AtS’ development

Mainline development tasks taking up my spare time

Wesnoth.org forums moderation and administration taking up my spare time

To say that I was overjoyed when the Big Merge took place just a couple of weeks ago would be a big understatement. As time passed, this campaign became more than just another Wesnoth campaign for me — it became a part of me I thought I had left behind when IftU was first completed, a testament to my chronic failure to drive my own projects to completion.

Development Hell

After the Storm changed a lot since it was originally conceived in 2008. The original draft was both overly pretentious and subpar, and it was not what I wanted to create after IftU. I wanted to create something better than IftU, but I locked myself into a trap by relying on source material that was already broken by design. Making a better sequel became my obsession, and that obsession led to AtS’ stagnation during the development of Episode I.

But sometime in mid-2011, I finally saw that trying to achieve perfection was a flawed goal in its own right. What I should have been aiming for all along was to make something fun, something from which I could learn, something I would enjoy playing and creating. It was that realization that finally led me to complete Episode I, and the rest was a blaze; a blaze that culminates with this release, today.

The result

The final product is neither perfect nor does it aim to be. I do not think this campaign is for everyone, seeing as how the gameplay and plot are very tightly knit together, and the overall scenario count goes up to 39 without taking cutscenes, segmented scenarios, and branching into account; however, unlike IftU, every episode is a separate campaign in its own right, and I believe that makes the overall experience more enjoyable and less chaotic, balancing-wise.

When I first wrote IftU, my grasp of the English language was as poor as my command of storytelling in general, to say the least. This also applies to AtS Episode I up to scenario 9, part 2 — which became the turning point for the campaign’s development when I finally chose to renounce perfectionism and embrace the fun in creation. But I digress. AtS’ prose is all my own output with minor amendments from my playtesters and proofreaders, and an experiment in style wherein I deliberately break from mainline conventions in a subtle and calculated manner. Attentive players may be able to point out those inflection points just by paying attention to the characters and their interactions — characters whose flaws and mistakes are not as detached from reality as the game’s fantasy setting or the subtext-based delivery may suggest.

The three-episode structure was mostly an afterthought. AtS Episode III became an amalgamation of a previously planned AtS sequel and an aborted IftU prequel. But this structure fits the narrative better than the original plan. Episode I establishes the setting and motivations for the protagonists, and provides more hints about the overarching plot than IftU did; Episode II gradually develops the characters’ inner struggles further while providing entertaining gameplay and dropping even more hints about the grand scheme; and finally, in Episode III things go off the rails in pretty much every way possible—including gameplay—and the plot reaches its final resolution within the scope originally intended for AtS.

Reception and expectations

Some people will be unable to find or interpret the hints and may see the finale as an out-of-the-blue succession of events, all because I avoided indulging in long and heavy exposition sequences that leave nothing to the player’s imagination and reading skills. I am perfectly aware that this is an inevitability, because it is absolutely impossible to please everyone, as I have learned from my experience with activities otherwise wholly orthogonal to the storytelling field. I think some UMC authors should really keep this in mind whenever they feel tempted to abandon their efforts just because a vocal segment of their players doesn’t like their output.

Other people will not like AtS because “it’s not like IftU”. Perfectionism aside, it is impossible for it to be like IftU after all that I have learned in the meantime about storytelling, life, people, and myself. The circumstances under which IftU was created were entirely unique and I would have to trade many things which I have gained or lost since then in order to create another IftU — and I would not be pleased by the result in the end.

I think AtS works just fine as an IftU sequel, and a sequel does not have to fully embrace the spirit of the original to be one. It’s not like AtS isn’t littered with callbacks to IftU at both direct and meta levels anyway. There are a lot of things in it to enjoy, and a lot of things to hate — and both are part of the plan!

But in the end, all that matters to me is that I like the finished product, had fun making it, and learned lots of things along the road.

What’s next?

For those who might think that AtS’ finale is a definitive conclusion to the involved characters’ respective arcs: no, it is not — but I allotted a specific amount of time and scenarios for telling their origin stories, and the campaign had to end at some point. Is there enough material for sequels? Hell, yes, but I don’t see myself making another Wesnoth campaign given all the technical and non-technical limitations imposed by the platform. The three ultimate protagonists have a whole journey ahead of them (as well as more characters to meet), and I would like to explore that in some other medium in the future. For fellow Wesnoth UMC authors, though, there is plenty of material left to work with if you pay attention to every single minor detail.

Of course, I am open to questions about everything you may want to know about the campaign, be it via forum PM, or posts in the campaign’s development topic. But I would appreciate it if people didn’t post topics for every single thing in Writers’ Forum — when that happens, odds are I will just ignore those topics in their entirety and not take the effort seriously. As a matter of principle, if you want to ask a campaign author about their work, you ask them directly through their official communication channels instead of walking to the closest park holding a massive sign in your hand.

With AtS 0.9.0 released, all I have left to do is take care of fine-tuning scenario and unit balance, fixing any remaining prose issues (especially those annoying unit type descriptions for the in-game help system), dealing with missing/placeholder/subpar pixel art, and somehow finding a portrait artist willing to work under my specific terms. The latter part will probably take ages, so don’t hold your breath waiting for AtS 1.0.0.

Finally...

I will be forever grateful to the people (and pets) who helped me along this arduous and extended quest, even those who did so unwittingly — if you are reading this, odds are that you know who you are.

Ever since I got my first laptop, I always wondered what kind of people are actually meant to use the ‘tap’ capability built into touchpads.

I remember being stumped at first when I would try to do normal things (on Windows) only to get continuously interrupted by some unexplained drag-and-drop action or accidental click. Nothing anywhere explained that ‘tapping’ was a thing I was expected to do, and was, in fact, accidentally doing repeatedly. It took me some experimentation to get the gist of the interface and its configuration, and as soon as I understood it better I went and disabled it.

Later when switching (back) to Linux I would encounter the same issue and I’d have to check the X.org synaptics driver man page to figure out the necessary options to add to /etc/X11/xorg.conf to get rid of that questionable ‘feature’. Fortunately, nowadays I can just install the kde-config-touchpad package on Debian and configure everything according to my preferences through a more user-friendly mechanism.
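For the record, the relevant synaptics options look something like this (a sketch based on the driver’s man page; the InputClass form shown here is for newer servers, and the identifier and values are illustrative):

Section "InputClass"
	Identifier "touchpad"
	MatchIsTouchpad "on"
	Driver "synaptics"
	# disable one-, two-, and three-finger taps
	Option "TapButton1" "0"
	Option "TapButton2" "0"
	Option "TapButton3" "0"
EndSection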

Alas, just like the power button, that laptop’s touchpad buttons were not designed for daily use — thankfully KDE (like Windows) includes a feature to control the pointer with the keyboard.

What touch-based input lacks is intrinsic tactile feedback. Unlike button or key-presses, I cannot feel the action of tapping a touchpad any more than I can feel the action of casually or accidentally sticking my finger on it. A very small amount of pressure counts as a tap, whereas pressing a normal touchpad or mouse button requires a much larger amount, and with good reason; that pressure could be the difference between holding back an embarrassing post and inadvertently submitting it, or (when dragging gestures are enabled) ruining one’s system by dragging half of the Windows system files to the wrong place. Fortunately, most of my experiences in that regard have been of the “cannot drop a folder onto itself” kind.

There is also the issue that not everyone has the common sense to keep their hands clean at all times. Helping other laptop users can be a quite unpleasant experience without a mouse handy — yuck.

And then there are cell phones. Touchscreens are becoming irritatingly common (even in the low-end market) and they are even worse than touchpads in most respects. Sure, you can see what you are going to tap right there, but is that any help against “fat finger” mistakes? Scrolling gestures are hit-or-miss by design, and it is awfully easy to screw up the timing and have the device register a tap instead. Your time and money are in the balance here; you do not want to miss while navigating cell phone menus. And exactly how am I supposed to point other people at some element on the screen without activating it, or without giving up accuracy and pointing from afar?

In any case, I am pretty sure I do not want to be forced to use a touchscreen-driven phone to call an ambulance.

The obscene joke that is Windows 8 was clearly intended to push OEMs to produce more and more touchscreen-based garbage, as if we needed to merge phone and tablet user interface paradigms with desktop and laptop computers. I could rant some more about Windows 8, but that would take at least a dozen more paragraphs for a worthless tangent. The only thing that matters is that Microsoft will most certainly succeed regardless of the platform’s user experience merits (of which there are basically none compared to Windows 7), simply because of OEMs’ and software vendors’ obligation to lick Microsoft’s smelly boots. I can see non-touch monitors becoming increasingly rare in the future for this reason; I can see the pain for user interface toolkit developers and clients (users) in a few years when everything is required to support touch input — yes, even Wesnoth.

And in this case, mice are not an optimal alternative. I have tried the Windows 8 Release Preview on virtual machines with a mouse, and I can safely say I find any other version from Windows 3.0 through Windows 7 to be more efficient for daily use. I guess your mileage may vary after a while, though. Perhaps technologically-challenged kids and elders will learn to use the “modern desktop” faster and better than us veterans?

Not my business anyway — as long as the KDE crowd doesn’t fuck up, that is. Let us all hope Canonical doesn’t fuck up in a similar way too, because everyone likes to imitate Ubuntu nowadays.

My initial experience with computers was through MS-DOS and old-fashioned keyboards. I grew up using standard two-button mice in tandem with more old-fashioned keyboards, and later upgraded my experience with scroll wheels and wireless mice. Scroll wheels are great for web browsing, coding, examining lengthy terminal output buffers, and even drawing, but the cheaper mice out there don’t really last very long, and lint and hair do not help matters when the design doesn’t include a simple way to access and clean the wheel’s gear.

Wireless mice are apparently a love-it-or-hate-it subject. My current mouse has lasted over a year; it is large enough to work well as both a laptop mouse and a desktop mouse, and the battery charge tends to last nearly a month. I bought a couple of rechargeable AA batteries with it and I always keep one fully charged for quick replacement, since the mouse only uses one battery at a time. All of my previous mice used two AAA batteries, which lasted significantly less. Their scroll wheels were evidently not designed to last, either.

I am pretty sure I will not be buying any touch-based devices for the foreseeable future. My new cell phone is more than enough to keep my patience below nominal levels for a while, at least when using it; I am grateful that the manufacturer was kind enough to include a physical numeric keypad along with two actual call and dismiss buttons.

UPDATE 2012-11-30: The problem described in this post no longer applies as of yesterday, 2012-11-29, when the ia32-libs* multiarch transitionals finally landed in Testing. Installing libgl1-nvidia-glx:i386 after previously installing the rest of the NVIDIA stack from Experimental appears to work flawlessly.

Quite notably, everything is working fine with the latest Debian wheezy packages (although I compiled my own newer kernel later anyway) except for the onboard sound controller.

Ah! But not so fast! I had forgotten that Debian wheezy’s half-baked multiarch support has serious implications for 32-bit OpenGL-based software on the amd64 platform (a.k.a. x86_64 for everyone else), regardless of whether one is using a proprietary (e.g. NVIDIA) or free (Mesa) stack. In Reicore’s (Mesa) case, this meant that I had to stick to the version from ia32-libs in Testing, which is Mesa 7.7.1 — contrast with the native version, which is 8.0.4.

In Nanacore’s case, the implications span even more packages. The description of the libgl1-nvidia-glx-ia32 package in Testing (amd64 arch) says:

This is an empty transitional package to aid switching to multiarch.
Run the following commands to install the multiarch library:

dpkg --add-architecture i386 ; apt-get update

apt-get install libgl1-nvidia-glx:i386

And, surprise, surprise. That doesn’t work in Testing because of bug #686033 — fortunately for me, apt-get was wise enough to block the operation due to some perceived conflicts.

In an attempt to solve this, I pulled the NVIDIA driver packages from Unstable and then tried to install libgl1-nvidia-glx:i386 again, to no avail — it required upgrading ia32-libs from the version in Testing to the one in Unstable, which is really a multiarch transition metapackage. After watching multiarch in Debian wheezy become such a major disappointment over time, I decided to do something different with libgl1-nvidia-glx:i386.

I decided to install it by hand.

The procedure was a little convoluted and involved a lot of symbolic links, and I’m not completely sure whether what I did works, because I don’t have Wine installed right now and I don’t really want to install Debian’s packages, given that—again—they use multiarch support to pull nearly 92 MiB worth of redundant crap.

Not to mention that by pulling Mesa it might as well break my little patchwork setup with NVIDIA’s 32-bit libGL here. This is not something I’m too keen on trying out while stuck on shitty 3.5G mobile broadband.
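The gist of the manual route was something along these lines (a rough sketch from memory; the exact package version, target paths, and the pile of compatibility symlinks are guesses or omitted):

# rough sketch from memory; version numbers and paths are guesses
dpkg --add-architecture i386 && apt-get update
apt-get download libgl1-nvidia-glx:i386      # fetch without installing
dpkg-deb -x libgl1-nvidia-glx_*_i386.deb extracted/
cp -a extracted/usr/lib/i386-linux-gnu /usr/lib/
ldconfig    # plus a number of compatibility symlinks not shown here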

I intend to revisit and unravel this conundrum at a later point and try to understand and document my libGL installation solution but, again, I have bigger fish to fry.

The Intel HDA codec for these controllers is apparently not quite ready yet; as a result, the mixer sliders are slightly broken in that there is no master channel, the speaker channel has no actual volume slider, changing the PCM channel’s volume causes some slight noise, and I suspect some features are missing as well. Despite this, the driver works for basic usage, given some precautions with the KDE sound system to choose the correct (PCM) channel for audio instead of the sliderless (speaker) channel. I would not mind spending some additional time researching the situation later, but I really need to get back to work on non-audio stuff (a.k.a. AtS) right now, so that will have to wait.
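In practice that just means eyeballing the controls ALSA exposes and making sure the PCM channel is the one carrying the volume (illustrative commands, not a transcript of my session):

# list the codec's mixer controls, then set the usable (PCM) one
amixer scontrols
amixer sset PCM 80% unmute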

Incidentally, the Debian KDE desktop task includes PulseAudio now (I believe this wasn’t the case with Squeeze). Rather unsurprisingly, PA continues to be a considerable annoyance for my usage (e.g. a lockup during KDE login, 1% extra CPU usage during playback from any application), so I ditched it after a day or two for plain ALSA. I don’t really have a need for the extra layer of indirection, since PA uses ALSA anyway and my sound needs are very basic: just playing sound from media players, games, and application notifications.

For graphics I’m using a decidedly inexpensive NVIDIA graphics card for the sake of having an NVIDIA graphics card and parting ways with Mesa for a good while. And while I had intended from the get-go to install the proprietary drivers, a forced and thankfully short Nouveau intermission confirmed that Nouveau indeed eats kittens. And that’s really all there is to say on the matter.

Both the machine and Debian wheezy can do UEFI, but I quickly stumbled upon a couple of issues:

Using the EFI version of GRUB means the only way to get a working text console on Linux is to use a framebuffer console driver such as efifb. This is not supported by NVIDIA and the driver complains quite loudly about it.

The machine appears to enumerate my (USB) 3G modem’s built-in storage as the first and second hard disks when it is connected, breaking GRUB’s expectations about the location of the disk from which it will boot, which becomes the third (SATA) hard disk in such a situation. The PC BIOS version of GRUB only gets to see the real hard disk drive.

Since this is my first time dealing with a UEFI-based system, I don’t really know whether the second point is a bug in GRUB or in the platform itself. Regardless, the first point pretty much convinced me not to spend any further time on that and just go back to the BIOS flavor of GRUB. This doesn’t seem to have done anything for my broken Windows 7 installation, which I probably don’t really need.
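Going back was uneventful; on Debian it boils down to swapping GRUB flavors and reinstalling to the MBR (a sketch; the device name is an assumption):

# replace GRUB's EFI flavor with the BIOS one
apt-get install grub-pc    # apt removes grub-efi-amd64 in the process
grub-install /dev/sda      # device name assumed
update-grub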

I have been working on transferring my configuration and files from Reicore since about Monday, and I think I’m nearly ready to get back to business now.

(I actually wanted to post this on the 7th but I got sidetracked by the system migration and testing.)

First there was an old (1997) Windows 95 OSR 2 box boasting a P55C Intel Pentium processor with a staggering clock frequency of 166 MHz, 16 MiB of RAM (later expanded to 32), a 1.2 GB* hard disk, and an onboard S3 Trio64V+ with 1 MiB of video RAM. (* Hard-disk manufacturer ‘gigabytes’.)

Then, there was another OEM machine (2002), running Windows XP on a 1.3 GHz Intel Celeron (“Celeron-S”) including 256 MiB of RAM and a 40 GB hard disk; before it was decommissioned for good, it ran both Windows XP SP2 and openSUSE 10.0; it was the first machine on which I ever installed Linux (SUSE Linux 9.3), and my original introduction to Wesnoth (0.9.5 from openSUSE 10.0) happened there; the onboard Intel 810E IGP became the victim of various Linux graphics-related shenanigans. (This was the last computer I ever owned that included a 3.5" floppy disk drive; unfortunately, it was broken and it took me a year and various casualties to figure this out.)

Later during 2006, Blackcore appeared: another OEM machine running Windows XP SP2, equipped with a 2.6 GHz Intel Pentium 4 (Prescott) with Hyper-Threading; 1 GiB of RAM, a 160 GB hard disk, and an IGP from the biggest piece of shit chipset manufacturer otherwise known as VIA. This was my first named computer, a practice which has truly paid off to this day. It currently runs the same original installation of Windows XP upgraded to SP3; it has run various Linux distributions and versions and I’ve not stuck with any of them simply because VIA is the biggest piece of shit chipset manufacturer.

Following the color-themed naming scheme, Greycore became the first laptop I ever owned around mid-2007: an Acer Aspire 5050 including an AMD Turion 64 MK-36, an amount of RAM I don’t remember anymore, an 80 GB hard disk drive, and Windows Vista. It first ran openSUSE 10.2 and openSUSE 10.3 besides Windows for a long time, until I got fed up with an incident involving a security update utterly ruining my system with terrible timing. It took a while before I finally decided to switch to another distribution instead of keeping the same old 10.3 installation around, but it was worth it — Debian Lenny (testing at the time, Q3 2008) was my choice and I have stuck with Debian ever since.

Bluecore started with 2 GiB of RAM and ended up with 4 GiB as Wesnoth began to demand significantly more memory for compiling. The 2 GHz dual-core AMD Athlon 64 performed very well at the beginning, but our favorite open source game’s development largely outpaced it. The 250 GB hard disk served me well despite running into low-space situations on various occasions as I began to experiment with the processor’s hardware-assisted virtualization capabilities. This overheating beast (51–64 °C idle, 65–92 °C under load) has only run Debian as its native operating system besides Windows — first Lenny (testing, later stable), then Squeeze (testing), and very recently, Wheezy (testing). The ATI Radeon HD 3200 was an infinite source of frustration when it came to OpenGL on Linux until very late 2009.

Its untimely and infuriatingly IGP-driven demise resulted in Reicore taking over; first temporarily, and then permanently as its 2.1 GHz dual core Intel Pentium T4300 and Intel GM45 graphics processor ended up proving far more worthwhile than Bluecore’s AMD-based configuration. Reicore (an HP Pavilion dv4-1624la) was purchased for someone else at first, and ran Windows 7 until she became mine, and then I proceeded to wipe it out to make room for Debian — first Squeeze (stable), and now Wheezy (testing). I have never run out of space with its 500 GB hard disk, and even today my /home partition has a little more than 50% of free space. It helps that the processor’s lack of virtualization capabilities has not been very encouraging in the virtual machine department, I guess.

dpkg: warning: 'ldconfig' not found in PATH or not executable
dpkg: warning: 'start-stop-daemon' not found in PATH or not executable
dpkg: error: 2 expected programs not found in PATH or not executable

Years ago, I donated my TV to someone else at home who needed it, just so we didn’t have to buy a new one and give up on buying the laptop that came to be Bluecore. More recently, everyone at home decided to buy new TVs for their own use to replace the old CRTs we’d been using for the last decade — so I was finally left as the one TV-less denizen of this fortress of insanity.

That’s no longer the case.

I have a TV of my own again, decidedly cheap as we didn’t buy it for ourselves, and it works — not that I couldn’t watch TV when I wanted already, since my desktop computer (Blackcore) does have an analog TV capture card that works nicely on Windows (Linux, namely X.org, on VIA IGPs is an atrocious disaster). Most notably, this is the only TV that came with VGA and audio cords, so I can also connect my laptops (or my desktop) to it, although I can only wonder why there’s no HDMI cord when that appears to be what cool kids use nowadays.

It’s not too useful either way, for the maximum resolution such a setup can handle is 1360x768 (albeit the manual claims 1366x768) — compare against Reicore and Bluecore’s native LCD panel resolution of 1280x800. I don’t care much about its actual intended function of serving as an entertainment device, since local TV kind of died for me circa 2006, when it stopped broadcasting anything of my interest. I spend more time on computers doing work than watching videos or movies anyway.
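Hooking a laptop up to it is a one-liner with xrandr, at least once you know which mode the TV actually accepts (illustrative; the output names depend on the driver):

# extend the desktop to the TV over VGA at its maximum usable mode
xrandr --output VGA1 --mode 1360x768 --right-of LVDS1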

As an aside, it appears KDE SC 4.6.5 or X.org server 1.11 (in Debian wheezy right now) don’t handle display output switching very gracefully with regard to window geometries, and windows tend to become 0x0 (when plugging in the TV) or get tossed offscreen (when unplugging the TV). I won’t claim KDE’s window manager is perfect, but in my experience X.org is not the best piece of software found in current general-purpose Linux distributions, and it might as well be literally made of explosives and still do its job in the most basic configurations.

As I previously reported, reicore’s hard disk might be dying. This is not surprising in the least, since I have been hearing loud noises from the drive during high activity for a long while now — playing vanilla Minecraft is apparently the most straining task for it, for some reason. Considering reicore didn’t present any such issues at the beginning, it’s possible the person who gave her to me when bluecore croaked did something bad to her while I wasn’t looking. He’s unlikely to confirm such a thing, though.

A short drive self-test last night revealed that one of the bad blocks is currently part of the very root filesystem. Since I started to consistently hit a bad block while logging into KDE, I decided to run e2fsck -c on all partitions from a Grml live CD system (also Debian-based).

(Yes, I just realized I inadvertently left the swap partition out of the emergency check procedure. I just ran badblocks on it in read-only mode and it appears to be clean.)
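For the record, the checks boil down to the following (device names are placeholders; everything was run from the live system with the filesystems unmounted):

# scan for bad blocks and record them in the filesystem's bad-block list
e2fsck -fc /dev/sda1
# read-only scan of the swap partition
badblocks -sv /dev/sda5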

Scanning a 466 GiB hard disk for bad sectors isn’t a quick task, but it took just as long as it did with the 1.18 GiB disk on my first computer back in 1998 — about two hours and some minutes — and here is the good news: there are only two damaged sectors, both of them within the root filesystem, and only one file was lost: /usr/lib/libQtDesigner.so.4.7.3, provided by the libqt4-designer package in Debian wheezy.

Why the dynamic linker felt it necessary to access this file while launching Akregator and KMail during the login process is beyond me, but it’s good that nothing was lost, as I promptly reinstalled the package. Either way, I had a fresh pre-upgrade backup in my external 2 TiB hard disk drive ready just in case.

I’ll certainly have to get used to making backups more often than my previous, sloppy biweekly (more like n-weekly) schedule. Thank goodness for rsnapshot!
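rsnapshot makes the rotation part trivial; the retention policy is just a handful of lines in rsnapshot.conf (a sketch with placeholder paths and intervals, not my actual configuration; note that the fields are tab-separated):

# excerpt from a hypothetical /etc/rsnapshot.conf (tab-separated fields)
snapshot_root	/media/backups/snapshots/
retain	daily	7
retain	weekly	4
backup	/home/	localhost/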

The last time I tried to switch from Debian stable to testing I was hit by a disease known as X.org Server 1.10, and more specifically, fd.o bug #31017, which made my KDE experience less than worthwhile, to say the least. The solution I devised was not particularly productive either.

However, X.org Server 1.11.1 hit Testing a while ago, and it’s what I am using right now — I successfully switched to “wheezy” again, today, and this time it would seem a massive rollback won’t be required, since the aforementioned bug is finally fixed.

I figured that fixing the recordMyDesktop breakage in my Debian installation would be worth the extra effort after all, so I made a complete backup of my system and switched all installed packages I could to their debian-multimedia.org counterparts. That is not to say that this road is covered with rainbows and populated by puppies; dm.o’s version of the mplayer package conflicts with Debian’s mplayer-gui package because of one icon file provided by both, so I had to remove mplayer-gui to complete the “upgrade”.

Based on what I’ve heard about dm.o this is par for the course, though.

This repository does not provide custom versions of rMD or its Gtk+ GUI, so I had to stick to the Ubuntu 11.04 (Natty) version of gtk-recordmydesktop (link) to be able to do some configuration again.

The outcome? Success. recordMyDesktop’s videos finally work with both YouTube and ffmpeg, which I had to use anyway to reduce the output’s size from about 250 MiB to 50 MiB so the upload wouldn’t take hours or days.
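The size reduction was just a re-encode along these lines (illustrative settings and file names, not the exact command I used):

# shrink the capture before uploading; audio is dropped since the
# capture is silent anyway (see below)
ffmpeg -i capture.ogv -vcodec libx264 -crf 28 -an capture.mp4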

It turns out that ALSA doesn’t expose a hardware loopback capture control for me for some reason and that is exactly what I need to be able to capture audio from ALSA applications! I haven’t figured out whether it’s possible to intercept the crap sent to the sound card in software so I’ll just settle for silent captures for now. I could use the laptop’s builtin microphone if I wanted, but I tend to be in really noisy environments where anything can happen in the background while I’m recording.

It’s a real shame that ALSA can’t detect (or reicore’s onboard sound controller doesn’t expose) a loopback control. If anyone knows any method to do the aforementioned interception without using PulseAudio (or Jack, which Minecraft/LWJGL does not support), I’d really love to hear about it.